[Binary artifact, not text: a ustar tar archive of Zuul CI job output. Recoverable entries from the tar headers: directory var/home/core/zuul-output/, directory var/home/core/zuul-output/logs/, and file var/home/core/zuul-output/logs/kubelet.log.gz (a gzip-compressed kubelet log). The compressed payload is unrecoverable as text.]
bc.6cK9Nd|4ΦdzÐ֦]Sd4JJ%QC B̘VS*.s 4 Pf\Ҍp9w:`-Jzm*3:uqo]zy\Z't~<~/9b[D/OB Tӵ hP&B tRZU m{YUoCu|BP߼1l*CvIO D-f.1ֽZs7b$ʹcWԆƨ j 6 31\laD~a"`^n,A QBd(xX!Հ!33d/:L5) &JYh82F>Tg16nfhX6mk;_T , {6$b!e_;Yt,-``P{ 2s|g??!u]pip 9y e} ij8Qvc#|8+<&g!.> 510vKSts)EaLf@AK5aR",ig" xwi3 sCXk#&R_" \kZM2}sd(y O>޿]}4@:8yb8zZ@6mw  39xkd-O!҈"$RշVXѯ7֛bn?ZJ^gwOB6K-@򤯗|g `0Kjھ )קirx1#/Vy;* "k[tR<_ij?_&.ʾF.׏|av׿wuŴ" [3XOP XgXvOIꓭ3lW¼4^$B,J o*[YQbՏ7{77,N+aW O[޴u:ZM\n& ٴ{7}notZKou:3kmFi].6=6Yr$yf<߯ݒ%^[a;Hd_ꠇ,d~% _H %l$j[Y> {u=yw ],|^ ˸߾xmJmb񜆆OM 0q]\Pق#=~z]0(}X*q]|bnl#v|c5{}COz[䑻SOOߠx u10:E@m}7=> ўGOyWwp^&'ޔd E6,wB1?f7 ε;6p3 C VIۆR;M,[@m%mm޶ǬǍ&]<{D~Ãuns<.ᾯý]Ŗc[lTXi-&ArC, jAX8;X'}E;t(.QNR ,@EqocBmx.WpK$D[>QzYC0=K/-.rvC'<;Ǎ4l[bPGÿomDy "$мw-KJ4(gWYw0Gkf{7v4?B{_"z|{ͻ`ql屯 ,XEqGw=Y0M;b,~w fl&PsPyQkӊcO;RRqjVfLMS`) ZB4J8a+}*8mVuՉT CXe5EJAj#&&V'텤K/ZƢ薥Pbp"%3Ku?󖻭P=ftݞ»w]߇AnZWGWݺ _?.?6ǹL'riankW󋰍k;]M{kYFni~߉Lna{o8:b#A77lXlM7wboa\2ߵVl#\JBv݁k<+#)i^]u`"V@ED hKod&`^)&X+X/);hKk pb& UȣRA"锨G#,JroS,w OY+q6d}g3T͐xyAa鉶O]NP5͋ksLd@Qҵʮ7g쳦ɛzG7.R4Fut ~8h|7 Wg\ߟFaG;E#O}XS/)qdu g90}59l7-6jTdG㡓~jMنYٲ|3?"V=fֿ~SiNmr鷽F#o&wQw?F.ϨPf·uCiFގP/z-%uk;\֫bqc&)ibuh&WTE +UVo%O@IAi:.|J*(N' ya^iW^2eczC3vgYBLOɻfD+L+k_Wi"(!࣎L2cX)p37O&Ke;Rus3&'IȹfL>qR p8|m>|b}i)KvEq HBII)Z:0*WBD*ι* OT `p Q9iJWsؕ$%(~G/, l-HHH0uHzhǓEnL:Is1ޯ3p/h*"du/0zNOYɩ9Z9!Do+ 8n@,uI!RJB)iQSg`:u䘍Ԙ`3"T謏 } F@TU_ovǜD7s}na70a85v4uz*wW-oJY5QSO.d*y(e* ӄӆt[Ӏt#H;Ssf/ <1ߝ"VHi4Dl=Z f6$D((4LjD1yVbLpol8{KP5)ZE(2Ӧ'6*y\\ \ѺkݒW?bCoj{Ӛk^BYqC!I+oD4pGu4l"JFs8ݲ(Ƈ:~P)>}0ဘ֒ aZ ,)@&+.Bm |{~*#]@Y|߬a^@D rGUPс)Y ]fru18Q(?"u43ysMډg{赞z:cФd-Zz=xPg_9x3%2_*"8ij%' %2 NzET(Vг[ASpSջ)ZB'NńS%uѣqL1%:Ddn_Ags'擢L&8zɇ7]kC &C V%8HyLYIq]@ jDbJ8+"Wy>!h>brg}̺LZJX_p£ƒNhiND(͍I*!8!!^ :$Ia,J"%-#zj1!15&Qg+\;o钷*Pa{EPG;|NoZȴ^}s$õ6`g\0A)_LU}O&&.AO3! n $1#4o^mhp_jM sIT̄}1p)q\|y{T2;ԇ06axʼ}κ=>zi4 oj;8[`i6!]UWc㓦~,&а>mMa"q0n ~w.fWyb}zU&]C*_RP "@Ԫx RR,rSp@b2L+"$OI sBD5t_ @E@<ۓ2C)1 5Q -2:Ha =$Wmk>P`2Js+aC, A2BV#:qpWÂ3sB̔zCktIYodZ(]]B@D cr&Tୄnb F'( DsCi˵)I"i%Y4]t5xqlu;_fbAUL)ȔɈ&p89 B9{ J\6W eNijp[V ahhW{i'jhp8keĢrȓiW U# VI. Uϼa:|r.Y%vxuU{_j}1SV􌠊eDVeWVBEADRx+HeANc}.RWYNjgPqhT WΉ"US Ad+!i/)2GN`hk-wD ǓӔ͓brF CҊXA,*qy8;,gopl!3gΗ[C m%?@px4p,8HR¥*ԁ%(PBX u`,ԁ:D*ӆBX u`,ԁ:3P u`BX u`,ԁ:PBX u`,VP,ԁ:PBX 8b*n瀓rs34sHAsV#0OxRrYd ƿbZ-f~RR%_kqHBig1EY؁1̮a%lFV${'lu=HYYYQ'#"#N<W\zLYʑD7 kLgR Z˃TPגUD=0?eyzLՒJ zFt<e^ gdD2Ogv+s71zKA[gZeooTœ-}6uܗcfhY;6sYUYSܸFht)#Ί_5Q7 k{zkÇlH~nS! Wl?{>P\uG+F8z\ܿA@= bmye~`؛a<ɔO^o|bQ] }36??G-&_;7O 'W~g+m=^<5Q6I#s0(X̯%oF3n{q6ڕܯ7q!W0ZWTY) lԫŵTVV\{e,2vWTAJUlÎVԹ/(! I^ r~XJBx_lgǎ8;CYל6TRR 8E3ʃqWB m(.K`h%8ZpkMh9t$>ާK/^L⭛cg_b:kb*U%5O0yܛ>ԂؙT_Ѫ3δ2(/dDWCyR%^ %J-)/QJV(/R*_@qQVpz~d<&ڴrܸaRPޕ҉`UŌpWy\ιjfvng9}!#/vfݎIK!% *Di4ZbscycDT2L{MVu5EkQ܎vJ;_;~)~W/< '!OMs/^ȽhKqct?y}\@1I#ʆ+&T{SzΓ0&L$ߋ%L͖f ^)c̫_ΤU_"*K>C )OSR -K}6 s4M)}:|} ~7H?Q_3o! U耐  X""vh 9rAQupDs0tp `8-KA 9OJ

gga?y~v^Sto^t4lJ Xud&91EU]ZBa]n!eYmry'VIWy$ۍYY#2'l#WQ\ {\F63j~O1>!mv߇*ty;Zۛۛ07Pw.o>YK `T9tqzPvS1(};!qe]As,Krj[#w"rOb b##PT[nk:o[WۜM`<jnҶE ~9Զ?{:6xL3сAslNY6 Q!9Our/8VZK%0"=*ޙq0ArQSc@#0Y(.FS8/r9@Ozv(E/S瑉pA%+%r X6&D;ps"S$XB$!. m5WZlnx_WT 1ھ=I^xXuEt9AGpq M)Y* Q+?q?LU[JJ%"j6J@sm`$VVKQcC,dɎ9E?N 8+z%hS(U(Ϡo("njF>sIZxD뉊 ր7e;f.j}z7~]}7cp8ܢle7ͳ zlo{¼Z`ym㠰PjIkjC6R`VکTRJu074S>P2*U [yCE|Tite[YNt^0[RXf6[6oNkbAq^HJa/`, _YP ༷T,NH]r*Ҟbȑ`5թ:vp6Rk[jH7=?2M磶e hS>*n{nwûgumO70bMȝ5yUYj")u'sOUӺ`a[Wa{r>W=?5.@6y0v7?ئ0VDs랎p)6tKo }Ss;Wo^3\f_a+>h!˖[g_6FYvfdtzhjL砀Nb`FxVER%MV4HI;mJ\]~,g r,z6[pD ""Z.O$(0ꥊ"p%^d(q%'$omY":bT0Q{0"h%5INt.Z TT28uYYD[FT H6L-I@.1*/ךP|)q/&+Z u5'=(۰x".q+4: wlƭ`OmT.(55&zi"2:Ř X`:&YtMJi*(%XLXNW)fƉX\ڨʰa !k[:_a6nM A:Htp"h$浨9\EͅD EKRAJ$rgɋ qa+6G3XL'161abԢ$֦tuRGl?%ZP8iaԦjw'V`:A2 MV4OT@UB0@+7$/ IVx.x#@ rU;X#b|L2F5h a1qamϓJ30^ """+CwxDC1׳K*xjBh ѢUpuqcY eJmcSPd8m>_5ߨzv-a)}gcau<}B\ZU?W\[YPUz{S7~y7, ?Tٵ=>FC*t{0x[n6j4rE~}3Re/N6y&s_ &7k)6f%"ѹ`ڪ:uUF)UѠ5C]BCEq%_}8P<%3tmTb,5VsU#+\H]/v/[`Hp6~TZSCҎW=(J&P$[ gjz6 TvDUf-\b .f EeY۰i-aSՋS. ߱k3r>!5}~ 26$)׭:;чٻ9wjBg$>sR;>eɔ:v`~ՒA2mZ:/ɳAP^Ƀ. y}@ / +uR> ,T%؝sY@kxِDwu`e.[|zmo];vsn+ň..qG ntD, oN{sҁ aMo: D}Sx7ʢ%ϣF:ܦ>!w djƔkqd1t)(^׹%]u#=Fu0dt;k%t4B d͵`vleP?nB=PoU+ 'jZ̋s71bP[mz4s>6pVnうMp-o_UR^˰@^4r&8P)hLR O굠]ZHZ:DQ׶PpNzo#W7x!3^9po>pu GSW|+,j :O97Y~~2" $՗.q PPgo?0\fV`,n% @  w$g"ݲß[{/e7Tt41sm#-F+o? ^*}vMxj!_$r(V߃ꟿZ+/ %{S}S|*Ej:Պ֭$Lrq1!梔`ɫ7\^ǯ/zM<]ׯZdqu|[_{{44,Wv,v|і u ~]k7Zem+״}96E&{]Y+Tz=*8$~ =YԂQ- W,Sxx PIty `ƃM{auS-+&]^­] %M&zE<^(!@0Y}>x;;o.c+(GI8m֓˖7ϼ킕V=AEÛ"dyʇǕǡ*3o,;~gi;`EZy4 G-C传Tٯev1c/ga,[k&ͻژ9jTM Mp˥vM/+|+f}o8_X^Bv6UƵA0k3 ܮ lc| c6 /4@ L̦0.V"̪HI(3T ::oOm]/h YYYNo6cXTҒs5o6e0A5d{gs')>xg<㫁. Iu9}/EkcekL'r+Lq%LiD}iDrrkr (,sNx^55]BR*6i|ȥءԮh`Ϫ9uFZ~;u\& Hcv^xH%ɼfL:e.D6aݴ vϠWX=gdzvl}la#[tHz5t:|>nM ͹ઔCFDK R`YXDz\,zEeeÞL@3*jerGQ4v2R2 ,l!D*n }E`A'7T:39&LiZ FvEQ^ޣ0s/S<ύoEzOvM=k' =:f,l`uxu7xU F^4 oRq)ҚZjRb]hvT*TvmnFTrA(ҐEX ,J#h%Hd5u[*D9 $2C6ze}D ED 3 F̸gQuE2Ol|1:M˽ڙ@Y$":D`> De'L JBQ}Yqe_C^RGR2`1lҒ CwQ{E+$@"x/^'X͊GA7sn& u1+!({ $"eBr(: uT>XP˘TŨuU*Vk4Id#V1EC= 4.dEżm-ә;zw܁8(x:o~hz%OfdA4!DH.4Ѡf,{}~#5Ȇޫ U0 s`)gPTj7 M^N[19wPH4v6]>ݧ<5fv~=u'i&ԑI󬻙&]~أ# 9mj\e&oj0yj5}E=$z$ZM*$,O=$*VkU{D+I,d'wi0CDUt['mzMHi YHʢh*,Q')DιTdVF Udǐ3rnD73jH|fʜ9%92}mƚz{b J_J Z gGGRvkK9^iqQi8a\-vPMT24!J׹A_h~!% uT 3oZ]S$%(;$oӭ_lŝrmYD\ I>[4:PDA#d !u :#v:qr_:~M} ;:Z.,}S|EwdЅ=1:BPs٢LlAƺ 6Y-Q06I$ʨu텷]g]t0jӸ}w@ѳI O.>},lQ*6o>]WK2z&HѰ>+F"pY$\E{wTt Q[' ]Ox u﻾ѭΡ[|9t=|b9pn_ϝ_y֟Ƶ.ŖѸ[;n7ve:kf\H%WCR~DžT*=B^zq!} d<A )ր$T"75$d%{^wTHg:/{T?G6 dV[QsHuać5ef>~dJjj`@DS#b6)X,2}nKC)I[.9;Q:j[E!'2c4V8]LxT)@TJ ٻ6UH~0,9HXZ"Z]!EZe[2gS]zp*Ҟ±eh/!l4bOYR7e/oOnzj='w1Kӵ^E:`3fV%m]xgfL)MN܇]X]:'H )x)xww7ڻw9tuf@;[v~6y0v,;iگ=GfFo +\}+vnybCpn\Cl[nmB93OlؚnlqP]Mo&;Y}J6V6E[@#B'Ld:x#੎dAQEӆy hBD{]\L#eFTQd0K=}Le ɧk+JONjH,A;-9pI `mvs:1V'At|. 
CPCq> ~_T !?WYf.,h$Fh_]% W:FO9n;4AGjGH.O<F}+-6 <-G\^/@8%9%j<*ٲ}x ͦt]dwWl.۝bk^"{>?2NM  29`90)Y1hb!2/wl< / q( *@yR*)KlH > oem!9;yI?ըɇ{pGk︿@?E9Y{k#^&" 2jZx0/ Im&N7ZA:D+ q23/>zzg@/ yB΍GTrN#՚\锉6gQQz-4] OV]d?ߨ<:@gq꥞D6X"PQ%4)E*Ðˣ@$&3H}9uB#wUW(8de>~'Tijw7}>^}0n 2;YUB!Hd$s6I*ϵ.4'8V:-wh 'G 8OY|pӈ( <BN(lwxbքpJߏ-f9jhus@ߐQ.FŇ+f|r>[,O/?yOl~~oq\&JzP{B{JȺaU5P}dPz;݊4_iӤ(?yg6M<=G܀l%9txJ_V%R]7N征V:1wd+(H师ChtEٙqb羱kzkn?C24|`L_;~cG#wc5KiDI=ꞔgrrK~Ntyd/}QmoL~.)W;ocu5otyj;&FЙ:=7odS'&KZf$L^HX 1su&]lJO͞UϗC1V9B!`^RL}T}fuHL˾SgP1(5r2DEpBj*5¤\KZy*Cc sM+Zo-LbFYLG /o׸'Dhqnn}K߮ߦiֶ\bqN#N^{0){{3eu${{׫;h+qkb*=ިtB"&)]%igk]S3`ڪ&u%WTE Z`dx-PO)J( șuM7/r/"Pc5Z)7eS0]Q#:Y ;" ]ZoDW]i}w˻ y1 q,>o dp>$3dt"<=wVTY.gMO&0rfL"8HFȹ_Y_gE jn頻ݦhˬeBҘRvW61WRV T* &~;),b:.,de]3uOx2g"f I2mM(ȵxu4!C&*64LjPHĘ 8*Vv:)ZKw(FΞ MH_$YϔfEzv5ЧLv/O܄ϱ{=rgDwjȽ:yD)'-G$FoP")W"JFs8Gp+Ƈ:~PUۖolK]nHɈ!ALkS0-NPd!% (.h/; * C!4S~+^^Xjk)J%*(;BƔ\Ef SM b181}3 ?ִ y8gZ#H"Ti*M&J ¢7ґ` Az A=QԒܖQ\*D` *nMȁ^!"(IJR\y!1#Z,Pr”?t.bc.OqF5LRUϧs6?5o ;\̚kS}ύM.ۼn5'5! )"Si}ltl=9*" 5r&pY}ގsw|ƶW fe_.g߫ mΦ_怏'qc Ոj<]fGWޗsXf9~i 0|tY3yc&yKsq\cd?`Woq-p6c7-ft5XV&?G`m^-:L2ꊠ 5heaPN%99׊sÉk𓛓z Ll][4)XO] b:m`;h9rC_:V%L SP~?\D1r g><}隭YǷN.^L{1xk UMhYu~9|Nѫx=jY6+T33rT[O"* ΁<&5y٧絥s_8.̌i~?|T#:Os3c\P[bS 7P+WބL{b48E6UB N"UOqsrt$Y QK 7"YꄓB&T)Q=11Hg-.s9Wgggީ4 [H̶)?vgi_Yzm` ̂>_- 87 M7W64Y݀q2&Frn,j˾b=] үY)C#O%-FΝ ʩ<7L1g 'SQxY{zULGEklfKVp SUL?bd|>;gUd~L+^WZ1!CiRA0X8An)uF+,euF;&bИ5vBklkSo:W=l:`P|p+ j_by'68{?~7W8l)«`-Z| sQbk5 7[0 О,y/rƕA1G[>2;F9'=beDH ܳh9LtW `-y NI`z[t~#7mY?i^U/ؠ mn;bOǫz-cv ݹ9D-`.i+f~~T3ͺ\LiC5l yrt/(IJRp Y(X;km"#z6L-ū xK[o̜H\~n 3zޗ+%'o=".nSk}륜˟ ¬@"eږB!)|9=DO'A}i q> 2}BB?%JȝE)JnLEo7S++w)%V^ \(E}M&Nǧs>}Д{[ {\M߽ѻw_u$7*p/i%"ojc+XǓ4DM@cS/*co/p6~_dGHbN᷽D 3R#gW\TƜ_^TnIBd\_NF5N2B6g 1s`h. wʼnv w Pgerj@939zd)8hF\:jϘtd C*r->%5lKܬm`؎1J_m{OkLOvڠݕ4ac% 6c݇*N oAFF z==qЃؑA!+JFc.9_C rfZ23 :dGB̄J@ !THT{8KȒg+T J z2U&ٳ|w|l&'D?=&2xsDٿnoVsr|Q$K"㔱IBI.'T yXeVpV9f]LW#17gsXzKN;;~~K>* zuJiYt2Q^OK.($KM 6 __F۠ϷOPv64uڰ(&S{Wç*ddFa# <&޸h4d6}g8#P`uKQK5gA%h0 H KVYXFppR!8\d3˜;i'fΞ6<|e41G&n5H'w{Gm4|H7,s~cgK*,X0 ow |"fEO?0X5n.nk-b>_U=,x2xKZer%;r˧E[(=I9b) A4)LO֥ybr׷zaꅭO"%#^7V1!CcH,d J'HY٧$D?| N׃X3˾xyr+Ńݗ౓IZY.w1z1诏?TGt`l3Ҵc T@pNwIEG [rUQR8C`Vylʒ 6ZeRvAl &͵+@62V3gոJ=,53N=FcXxht0 U<͙}ŮqA`6m1ɀ'&\YW u21P)kCTVu.1g'dBRFd2v̺,!Ģ%6G-sv#vҊy*]mvڼ2j{m{ (E&~t.)i2fepZLGZ \grc0mYicy 9%š 5G()"9hٍS,x*Xm~2"{D6!8E gc^k@2'鐘yp$y:X+q0.":7F Ek= )D Qs#H#EP%ҫe):͒CqQVEbf<ӊ\ɢ@>bDuL O+V;ZLf6&rcaq(Be<VVYmIp}*;|,]M?~THzY1 &4oиecR \ҶIIz\=R;ɤGvrE*kSCWZX!S(e'L `501g:0Ktvui)^1 ~Nb{W$/@d"GƢ:BI DM`ΪT 4ٹ YBLG B4!h;dN)2cPI29gJ6{&Mzw7WaFL>^!*1(?Ztni #`Q%#@"m"fFل*x ]D xx:L=ly^R#'c )k\&"h *8%57=jA`XŚoAwٯz2C1M]qɳf27NcZA<ԥ)K:M~Ro߰XoS kv_Q82˨"E=AmR-p#+ ^pw!Nøcr'4: UKBHb {@i1Hƿ%nw;5 Wzٿ|3Prb.(ѦtVwt+tG+gSCE7bɤd ]S+ o)~,@߱gy::/߇q|uY.+y&Ek}=cJmSv fo/'R;~Y$!6F7YHnΛƆߝ0tO]߆CJJW% Ԋqñ@M'|$)˽]ۜǻo7eA~Os~v_B!{I_qthp]xdeE†%׃5iۍXbtM+=l!Nݰʊc_#5Fj̏qN9 y5 ePd2 1 0"Zh"G\g'!$_^Pc Gn*%$Z+N8鐣HҠLhRJnBǘP,&ן2chf8?9js+'UϸDzQ*wI}eSUFkuetz[Ea;=;Xs(c/j*Z\U>ӣ(ī 6_ƴ ZκOAZq]>JR͏sZeI=q ]g}˜7זþaQ /]wem$t;j>c{*udQߝ )ABRA"K:ҁ(; IHqhwzk͓G{O疚@ET#:iw3 4w(UEXX`PUPA̯Y+|ՋZ]WIUtFĕ'Qcir=]j:yOryalg.r9xVA<= _w>UwvJ."(g1JSA-^'X0  ld|f,8ƵѶNkSֳf-,67x߮GTVIQZS 9]W4$6IUM}tek۵y9~;|0طEwR컦N,ܫe {#~L{RԂ,KwI+mK*; Ѝݽ@XFHTK}|CڅdzqjGf֥q:>rO:^Z<#h~֥;֥{T+8G10 8OR8/t PZ_>HgtԒJoaaQLz#Q fnMj{Nki.#D7}Q.yxDKԇ̗.s4 aDJh&1dP:N1-d7W9FǹvsFoy6 KBE.]o0q^LNJ4Y- 5iuI 3U`w}у-H^ZK$+9͎9d,[ %l)BV'7 '=m!BAvA_U2G_hW}[6Az"MKqyV#HО_OD_W8m=\3XrЫr-7YX2( e;+1q3R`Γ"fP&ý(q]%nmIK\EE3B]HA4F`}7k8wmw6ZM+Y/[=}>6cv[c; 󕹜K:ۡ+0 S髚 dirqry977ܒxiC#qDi5"eܬ}+e/mo^/]c9.wSi8t-@ {Ѹd: \PǢIEvFЌ?!U_&4-Oo{-jێ]Ҋƃ͕動W 0h,}4yEwwb`j/;&P{D>zdzR$y))),^骅J[SSc"C 
=jTʂ6ϻϭMmKsߟuˤtL/U\Bd"S_@rFUdSEuON`JA7{UO'/m / lj NB7ş\ZEa$tϺ-zCŹ,+Y{"ڙjɮiCfݰnPGnP=]3MpAA@*y=dcb# &]N(CVcixw|#\rW!Je2{D.K12XFmRN(3dsdɱ={@z=! +ue$-lr=+;}wݯhU譏/~ gDz=_Ӧ|} *Ph 8*Fc+ARđU.&a d Fk4wmZu]bc_f 8NQЀ1J dc9 lG'F@-UncCP N?OϬ![E)Nz&]إE[atmnI*.@gqښHAz)E eԌk 4 ]N9mG|D(Ϣ 1/#1!K&=-Ih- kl,eAӀi#&&XZymv.^,pv"4i4W25Te^U/g%p1>f8|\7.cKctcPCѦ`>k_5'n cFĚ>8UoSE ΕZl:.^<>${zǶ&f$$jdWr~^S5OX0R |bP߈W֊8U'H8q],༗Ƌ^?чWk7*Q3ZzY:xF˻f;C$f[DUy쌿jV_㔆_~^nM"N#pK^ 1L\s2 /lN \ݝ[o5C$;rtLd|C }ȋ#ENsjDl/txZ(7[EsE?/Z%ٳ[x>ZUw?ʐ*)%lÒ _S!yt.+g+m52z4q Z(7u=:rb^31I@􎌶o2cȅAkim&&U9-k ^GSMS8Yѧuin.in[L!ir݌&`s״4g_t̮cDLC[9n#?Xij@lh-k繏dykkAH>`^=[˕ncxnwDu{۝SעLӇD,%^lssT֧QY-c fe}3*pVUhU5UQ 34UXJ7sWj?"1Fc .!:] ke :@Q"AG ,zZp=)iNmP2خOdv!6Ow ?rt^њrI yҁ~"#,=Pw:gO, pHYRq9pI;-xbLOXGd%XdFR|Ҟl:8g8LH30 Zd$xZjoù[K]hWEtҧ(ýJJ*~o)[}Ҡ B=^c͋Hz6bN bY{n`j@\}M̙'_0z9ɉ2{3}҅RP0?]󼚽:;鏹ʓ^dQ YKs^Ճr=,rYVy^#Y5DꠂEW"D}*;U>,+ʠj)e;];r]$|NQwǤ\)$;~HC04iXԷiXFkձC(4,K'wuH`H22=In=YXtE6:RO;~}3 |Q 6ZaθdY@qFʶd`~C!^cyQ,,pG~.C>Rr&s9x'SA t1ކ]oGWrmFCq]:{ ))I,_)96)M;Uh'9D}U{7CSo?NXƣlZ~Wwp~okY&bw>ps9w=V}¬Ni3[IP]&vwz}&@q1YSoBZWO˺a:lwYu-KhY6ڽmEݝ7JZnhQ1wy]s-$7ա2&Xgm[9i+脦ɢMB~rϫ :eͭm:Ёusk ]L}AeG,)Ɍ}%v#phJ*S)?6Q%N J0F*bu0H2<̧z 8둍 %P rᙣ z=T_[9vBR]#n(Ō/RA`'4P+QtAdD\h-sLx!T捈 r.,vGEjK ^Ȍ ^`A Ɯ ! 3 sqƞXԱʆ거#Ldܒ~=98'pR/Jgy"*P.3a13钡UYEy%HաtLn{zn/2~mYz\x?J~s|9;0 cM@hZiu0F~MQ5Qk$R2EA a _d`+2pJ:1FƊ0Y )6@洤' ɻq* SZٸ/ =mV\htG/1)SK[;}y7N@6-3$4 #j̙<܎o>6hYn&k2A2wwm|ιI&MJYg],ɑy!,U ξ#3M4 Z#]0R@bIxmaKU9 ӻcq7\xXu!kR ~hO#VddQbr4t0nhXٗ~W ~ X7fզXi>Rp44YJm+ Yi7o7߅d,=؅nrSˏ7㴢P]\Vk=SojTE!5Xh-yF%&AR鄡"?]vnؿ;;D-c'K|jʔw_f?U~zޫ)<ɫe>U5i*o¤M41N*S>]½+P+Q6.ұXt~z\:) q1W`2u^w:`f>Tѫ3j!t zQR&Y1q)X0_0Re`V*"VO2~":T{]'\-w`r/y={"R—罜^+k)L@yM>jb w MX5u[9x38;9:Hv>3 ³0E7jS =g@vnBɹ`1X9bF{ɱ$QK-9+R 3Wl6(6Ci/`k^ dIuϴ|5*CB#68R$#/g)Oc.*5Ov@LS~QJj.jr>𯻁݆b0JPyxCuwu^o$W8ߺ-}6oaeX}=N@_@?|6<3UYP-ӆctEz"WTL蘌a&N5OM.RqXtʿWd*KE7ˀY,CL#@)<2^z2U+ctuH g ZGҠEd*ƌ!90`OLj9+j$(gr3ڳTI_ ]e2>[ϓ| yX$fږ'|/hGQ#$ZXϚ_dZIޓF*2VmY3YYm0a44 i)nF.g_p~OOVR;FFZROEE@(mp&Z!g.G%]֒ٷl?TRĄ?8,:Iƒgw([tUlo[m?IO˷3!-qJn1ˢ M~]E,hf 5{ZԈI ⻟~(8^HN y,yX|}~Nj:cJd5LsuV~u}QF4PI)NJšUK^Xȣ+\ό>/-!΢ 8S [d TK"ʛ$/0{x! [35*nmV=Dw#ϝ"p{_/\dR bJ/*m repKO8Qlfb"1o ӘG"h@Q]glrDŔ!YO lɼ{o>FHxv['wm)BzEof3 G w;"-8뢋?XY _ y$_'ɗq)kG(>ر5r_t}.w}im);Y{zqkP ;S[sq:~QkE N`0 &uD%Ԗ $Xts˜,[8ȫ0-?yMҊuОhGV|>E(X_Uj?b;^ l̹O/MGxvqYcўE>e2Prt.=\0][3Nbr" `6Lsi>LUgbo,""ž5#Rb>u0>QUphZ>>̋e7ẇ22g5U??_k~cWW@c{RQo>G[,*IA|#ٚf}6˨ ~8]!)4Z{秵}c\\mJ}w.EZY -o!nr"h*kK� J,&xcp=}I.I_(!9.1k-bm(bJEU?I͉bc~`liF#쇥tԴͥM$/!Y*6KP-mQRR(Ovh]Vd'/CS ThF I݊-Šc0ɺhZCtDf (X.fL.IiR GE5S4:I(*)V!ZH-t!n XɌalJYll&ۀ)J#I>bgk--HݿGji, CRBTC!Yi};~"F6**RPaf]|Kzr@cVKk޵,lM~%|m==2`b!0b@~OMF-[!!S?,-Ec}J:'erZ!>tFVcm $GY0ڢE ׂLQ 3mo ~J{JC.0(JGAEX.@1x7e Bf"]2U+V#I}lH`!gܣh`;I)XPPGBFtT)'L+|yh,*+zZFk(`B]A>`, ጨ2[r`5d!6HC#itV"˄:}n*At4XYt0w@]E0* H4pvfS.B% ` .e ` I&- p]BP LF0J892jhP5HG,E@<@7/:*<A6 9GhF'e] %ї5)-i̼T} S@!ѿ(Z%J̣ q 2Z* ȁ70R=(*(_CXkuڄD:[g-iO!C5 (N+S;L')! /)0,rn]v&^XyrtЂi[2f=f3Aw ܦLkP\i:@ɁhKQr2ÝHoQ,j8/4XH脼 jITx"2D&jZ!T^1 &Պ.Kc4-=G Arѡ^F=حνQx3(nCe6 ⥛s2YЩ ֏D>^Հ\ż3rm,9tŸ́$ȝLVhja]GΣN*q>e[}%0-VXD(;Kb f R;ЋIuD! p9}%_.$0%LE)e`7PJNZ%LEK I(Di:ࡃ,B 9^@|6ٕ;,u惙3`0bdu?l\@0H)YE~]`QmRgR)}J6ArQ+bycUjQ`&yH416ȉl\j%`eC:iϢ;k4Q8FJr[6i(RIQ{)@E8F 7ܭE @T ~ M{-{D0dJ-6mM:w/[VnfrxL{n%ڮL Q7]@L3JG+f= kpHS:n{Pp(r2(uTtk̪?k>Hr8%E. 
dL B9YGn|4Lݷ1=U({8n𠗨5ʡM!Ts ^!7fh-<7NO:)Qr]gT*"H` H H**mz x""0fA= i^æP@>o7ĊPDRe 4IUU;*亣Ioz*@=ԟP(eQRSd f9 U2X:W Qtj#ʅ0AT48gXkgΘI8SO 4֤U T)JK`KU3d$2r$jd-ZxZ-Q秽[io#>7S@P F}H46 =<pa VCIH ΀z"2\H O8%0ƇJF9T5] =FAXe(TtqH*}ka=z'(Jv Dwt|Jn7.@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; @tO'U|T;0+;u_ thWr+JМ{8@v$ N/ 9~>%@@װ@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v=A'P'bNN l|@d8x@ҲI:lN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'q}>~&qg/7}io/?,hw/| 2ʸd 3:ZlKֈGo\BGBqWl\6ByUWLn>>KlJM5D ipJ9ef)rp}vӓ@I$?}BzWn{;MeH/utϖI/~ry3dw95Y R#i1*!Tktup.ظY]_Y7Ϯݛ5RM_pZou+{4 {w|]6[nQ`ڿdM[nQCxO<;L~WB|Nro*WDtpF/ Yy4dDt]>:şYU\Yzw(Yi$,~'go.|;T;wvr?T4砙o/iu=kNk3mvg7NYvI ~ٸX0 SC$cgj=J;bѲ^ᙵ_|hnGΞ,=Z<޷puvXBOzHvkAo18ϘKҞ d_}V+;߭C;Ő틳)inz9p?{~uqiԏP\<<.j=ۭՂ+T~996{]׋`/^Z~M"jLX*?W0!(a4} -wo_.!s'{Wȍ܇=`ZbEr{`lAױvdIdO<+^Hd'cbSbf->f3kuDʏoYI`[# cC*rV'"Cd\hg$F ^x䪥Oi7OGL'Q)\ﮈS=1]o:&#%1-r-ʙ$!di4#qđC/1)_NXNB8M}{4i~:IJٴ8DWz ^4TB+W l:\땥rtUؘGzUURvpup}Ah\JUWJ6LvpupAJZW(큫,n Bi54]e);:KR-+1 h \eij:\e)MWW"}kUURt+W"o\B]/5JYJiLWWQEp:Y4]ҌM+HW WR~訂|1@_}p #]'xYvN)vLYG+a@/_.ސխI4LL  z'Ը "~JuQ5i UҲ{YJEG·qAuOpJUk(\e)+:J6-3 g+WBiu0dЀsO8(.#ҲƳ,\!\)Bq)i2cUws+ݫϰ!eC}'(ܫrB1MNV@$i LgqUk,kr?IJU%Lk+:~J)Rtk;>"wLvϵӴR'\C;mщk'vc1#NU8Vj|o'زPDL* (ހ@8C*M 9g:5FKIko]"l-|dzXx=Yq@|!3P l@) ,i9JbcLD"%tAHpGJo_Yy\V?+؊nFKT Kpb-?!s,Lp澽~{tԔ:JU*s\U|]^l۹;fIF)/yqRWLXm:|᦯|0~N:&?HE*-7rB'&# iHEbo T+SRF%Béx㫜}g/V9F J TiWIi@HpM@)hd"[>hT{, c?dOXwIA4,H/&dLDsaR1W4 -U?{t!9f./"T)DlFiEx3z yn1tD!8T~^kHf\E'Q`GC*8KFhya% F~3p녬C,ۛYҧUU2=ͧB}d<-8I յ'o6+Zl9]f']l>GFkXW$̦#WJBQ ާxSzzi 0q^_S׵\[ҵ[/3tjC.mewƀ+&ضF PE?&a i uѻd9?2-f^ޯ}s\㩷b9ġg5 O-Cv=A {adǽ~C~8Dw(>:aU/ᮒPEnAS ?*0߷Etbe?Ml-I#yzBD_\o!_֪KƉ8Y(a;ڀfZz8=TQQ㷛*;ޫ"w<_voQPHBhT205e }\ٴZ^`3: + PهꮪfFMЯjv=|PUeZD  XHE)9It`s FVZAvj$wcSX6x+q>+1^Q2e/⤷g(Yfݧs@, >{w?4ii2<9h *yKx.َ;С}l|Bk]tm1[m~=zaGp xV1űqB_?ߏ9vݖmBVv\hKux+1-DźVzŪfPń x6pI/xh pRY:^٧Q6@ߥd'/ZGV!uTN D0<);Och1# k5bPTygr"BN@hHJa)ڈ@<‰`}Ay؈kQiƜ)Pqn'"7Eҽhu0mv}2v҅ #}W,Q@nm l\hԹޯsa峍)*I6|QU wT7hTw$, KXH$)jzLp(iZ=]O#bK$OSIol}:e7$Qk"tDs)h@fZ8 ;M#:ű?$2gB))@9&-GD0K~ڎZꑔĺ5V]tK #4XP1%XcO-,yr= nI-ViA'sG*#tK@4 M`,}5KyI)mWM#y6FK ,:'!4 Y%Eq8I"wɘ:kͣu@"/{ֶ`P//HʾlspB!+˹_q(X Ή(\Զp tb@<~|X{VZI60K8RhYP }Ւ哒pb˻lQg 5io>P<)$K2նd`Uk;vJo]?/;͖Y}EZ]Q>:*a7Z[ g\l8G7%z%xVJT[E'W,aA &NڴT1"c᪐ $4U95s#B%toNjjR#b$8Er^KrĂOy%g>;2{@bkYhT.] Ǩ-$CFuq׍ )CU轐s+w/AH9NUm$ ]{KK䭁}x 0`cge")RC[$VQE*)V03=dTeɈȸ0s !_v/8uűGr~JhN x|ފ̂}x4#R8j9\P<&DCs%.+oTV* B)aټA|Ӭ :C/QufGq͙8{ə}3TahAa(eJΨJΨKɔope?_6"YM+d|G/+ *TV2y3olt*VaM[i6i{8C^:YQis"]Zu"n7W%8iJF &%0DŽ,s#SM.:㗙q :}CZ$lZqx\(#:ֹABHZ'%2LPzDV'+5"V h68 1yxNvlíTxlQ:TgR$R+O"d'JĊh+`*ǝ [e0(KږKm:ߖ(f6%/Z;JLkr (q GOhSQ4A P )K$)P5ܢ'cpS18-4.WbCrRj='c-6y /z~_,]Zd5gV%cSԳjN|k5Ƚu@Juo")`ǪRPLE= kk?`_ELv.c=D-- NZ4s&Z:\LB*BF͐ZўG\"AE)2%*Qօ$hTGema>,u>\J7ˆXFd 80>gEp?۔ RBS&&tԕ\+,#M 685I*s y5D%U/%YۣfEDh_p}@*x ^ c !LJ'ݽPa3E?jl֣Xқ.e})*Rok^NyMA(=:Jx:R/ 4[q2J(G-ٓ8h&F{;1ڤsi k6穅x/h & yw"֪pqL>gkvDd8Ya39P6c2vQ{mO-UBz9 RJV`r+eN2DQPTҡh,7 %q\9zH$zC6wA9MpHTZ3T_'6jѰg%  e7vclT8~P 4n0 4څŜҙsXkM,DJW""u994~\Bۣu$-࿏pѿ~W; uԭNZ[K u?nHM^ٸp /'EŽ-*ΌCO憣׎5L$s݁I{*h 91SE{E}C燎s5KiDbTR~YAblGS,1in=;": iUt_笄ZBe:>y{^պCMmJ컣 78n~>Zu=Đ,MU j@Ed8&ۼ[Ťn;M޲Ife[e˰ Jx2OͱmN4n;;h.{=cVMnbr%&ܡ̼@TK.Zq)8_t,BBd/x],K r) 7F;]q$SD40A|0maM/])QL6^ ^yqEgR?i(pBPCgF?MIuWuJ <S݄@ʴHr*Rmr]6CĜzH~!eu+T6o|F:&mc^yԙ%TL%(ath\%Zj"И@tJ%4BhShT1)ؽ%1%6>]LŠ]Mf8-q\>"Lq0]FƗ>ikt0m2%+ ?،lvr6~uOf=QӇK90ed@Z񒭴^)qm :.eseqLwZ|(1zJ(D)M7 Pm7Jz)wE˦`E!yt+5E+`]kiMohg;'їm]ZnyNܸwL(}#ʔ[YW$ѻL:{57T$52J1j]o5EL'7Utߓbp^/Y`gejzW]2P~٦XWn+LLAgZtk/ՠՠ3JFjX kC]!`Do*g"/tr:]eA^O6p"CW&}+DK:]e tut%){DWX.7}Vv5PF@WHW#iJBޡ5X)pYo|W-t(໺D҄ BUz]eBv2J5.IYٺTe9}fᄑX42 W( kY\l]GV=vo;` O ?rBQVFSyi<^Uc|7BUV[c(? WQdAAZF*^Ò? 
.f6+:!սBt^o(٠7|A\z}E(aEtu\Ny0LNPyet:v)RV@{CW0}vB.]OU|Y絫tutũ7thWVSu( =+|2ZENWJ0E]!`JIo*e/t u(Huˡ+ P#ʀUn3\CBWVkW喂=Е*!6])wў:ztQ!JsMu|W]e7t^9 tutelz QC1Ь1R }7i`??0ENޒuKd6-Z1.I|Lg?e?W8 r`Ȋ-w oٖk3$(H9wX#+cCrSHgMbKK+!$HP-&5l KlӲ= H9$*$TF ̊hXq`B,9Hn2_eõL˥7WIZ0gI? CB+2hC;V@Wfc@zDWὡ+ 2s.Hio R*tQ5ҕDq#BUI_*=WPnR=+̡?thWh絫k] ]in:4ĈO(2!o :ๅfph:Vu^(@IyYJqF $҄8.o/FR}7pv+Qv-h8???i\)>`Iԙ0Qe]@WX qAz@W,=Uh#BTUx_ 2y(;Q4S>ko\9r`2Ұ@`d,vċױJLVKnIG>>t$Os8Fu~a!p$W.\_ W?/Ї8{n nzâIz7x{?G//]1Ku}7/[ ||cB>tEF'#}/4zp@﯏o׷}"#A7/{ytO}Nj84n[*DiOG&^&62SȀn>1om꿽8 B\ȼ\A0qp%rWPkiگ:@\9*abEWKz"DKǕ q#$.fc\7L1(j׮(qL|?r:"KOaghץuS-a)w‘=G^2ޝr'/%ߍ|ZcO\"T\m֧:"8{>LgǗVrv'QF O_O%Gz:AEu)I1W^Iqi Yap%ry+Qv%*^qu NF+f\+ڮ:@\9.ލS Fx\Ja4q߇'uijyOWMSV +:Ĭ +~۰ͅ+Ct\A^Zq,xX+qW/·+Qkq%*_qu2Z X8 r;;MJTzq60w*&㮠q%*YquW1+f TZW\ v(wn'@N _JOvʻ`HkWl85MOR\񊫧=FywR\A< DnQpNJTYquN/J:7 6j^:D%W+9;WlLW"YJԆ *y-Wŕ r0 F+j!GC} xw򚣛|{ӳ3i1Hts'4ms ].on_x__~}{uԿDӾ UY?}0>a?h6#=-}INu|K(HW(u ?]I#cKC%m\05Y:ˑQQ6݅DP2B7N[IϱA|He 7Zv@]\r99"uW#гn*d>eKIYdVL`'}GSf ͨ't!bJU!iO9W]{U]*O; :6h%~Zhn\}" TĘHz`v6U&rmN;r t`7ZZj  9PlbLF3jLzNѤhYjѵhxj oavR֖eCn\u ɚ 'eG!ՈSSLj92T#b! YLf c"{ B%c_ ၈&3G{Vw4G@R+y{&ʰ6&]J'M9w 0$xB. YeMK>whEa Kx( FlTtY*MVTHk^y:fDy%cNօ9 ~ ԜN $W-wvZ8RIIT1ڪTwrҊJ $9* !֪Y?Ќ3lm6m'r05;b>#3k^,XU!kՕkʧMpS7B 37%bfC)N 0kϒB4BF( ٥fiLZ%|,/V+E)>#(P)-h t[2  VjXq21l`WP&lI^ XG!*(5 7Ep TˊΦNd d0:zbTV uWh%Wˀq3F̷6( iNFBYp6{4v&׹;f[*C(]gԭ9k9 Y' sG`a.nKPI!0^u>JATZ|7ud&;9Xi TtgJQ"Hq;ŢA(xgYy;"JPw/u$@!N9 7ՕHޫ]D5=))"FZ.==#̼BJư_Μ7@eEPF)6 "b $+auh#ǻ =sAvԙyr)ujz1#.UUQV:G$''c UkDBL&>aV;u&8.gﯮ,kwr+P ޹n\-ޚ^6uH} x.$ >:%:ppi7gM*)ҕjhJUaİ'8$;w]9Q): q%T MIJw 7t鵈 +-f]bYL[vVhň޲3AKgy@΀@ "25;9yU؍-F#tfLML(Y(ͮˀaJQq@ơ"bguE ת{ĢaVBBqYublSNX  s=bIVHgYtgOGFЀʬ$1-JkNVUQExworVe"A `A+C%BCj%AM.1ƹG<. \Axsmc$YƮ Fztqujf=)SZ[Ńq'Jg'UEŨն[Sr5H<RVO ]j313Hyy׻GFf'+YAIEa9)lz'3Uw_[:)Jjd*A3J`(HȺČ,m= Ȏ%pz{ZT}f=) *TO7 ZW4#RNnc~?"oQ^1 fpR" u#pzށus-й ᧠G*Fe]*cjdۦፍwܬxS#5wmH{7mE`nf&I&I,yղ"ZomG-[QM0N,*v=콮zYJ"3lyPI3D L5Fi\i O@ײ oY#9݃rU@ M' TJpzʁYkePX 3A8 dBg WυXon+dS:(42e#Эg~ Z6{ِB%pUYfy l2X2 4+ėG(I "vK{QK1lHJ0] N%g,0B3;&jHDWj-'<pR̋\Y_lT gY;pZAeirv ۷!fPRcUxkb%8&u"_/a mU9.w=rZ22]uW ̦qqq7 );rwCn"koV}x{Wo[0a\M&zs9YÔaf'+SXͲAAMgE=' k:0Y5Fu<VOA&C9x.-tγU M[dz^*) ~ p;|oO8ilNߐxV h9x9\J1 HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$"ճ 6uHp;#B_ bO\Z 7\Jqn HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$"ճ `A4K+dF5yWIpu+}9 HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$"ճ ,> λ#;op("1 ;}5uEh8t}$}qn DVw ]!Zg%DWGHW=%`+ og]Z/^}o1דc+ǥCt ]3 ]!]NWpc+-[}kE}o \`b#ckkVpX1=rd?{{l^4Ao`p55(8~|np45ޫ8Oܻ>&z9N'` }֦{co2 EnDc thm99%㫣^qD2Ë mONb6|Sc}a:+Bo RUT;T) GЗ1QbT=].z26/?*vW8]*UE}q>'ME݊Vi*>TfYA ?brFYWY@\U.= :)(QQ<#-:`]g"9۝zC}3~Mx /Xo}S7Nh~,P*t;\(DWVˮu QzKtut%u@jg  :]!JBv++VDWVC+D :RR :DWزu+th=t(HWxb+<;uǻB3OW8#+` CB3[d _jd]`:CW=NWR]]9%R 0~3ߪ٥7çiEwL5#Jk=3\>r&t"q1/Z[HuCh- L!U!VĨ{Qs̺.`+pMgO@PFѷ~9aZ _v,K'@WV־й&<|m]=N. 틮W]+@]`CW:]!JLv0gݡ++:]!Zu 4JVS m:CWve "J!Rtɻ•+th>tBc+c]+4 ]!ܗ>e7`t(5':F"m!JV;:]!JE1ҕS+uּE&=Ph'a_вCK{v!\)Bӈϊ~:M#J _ԳSyQyinm;/zMe,ֶXb;֡]FٲNFp2BDzϵ8alS⹼H .INEW@8~"Uaϖվ_޽nn=h>*ٲp*B1k,lJ%2e,{9y]ӿ#(rޔBwwuדכɯ8?. ^03mE }Ýׯ`ArNo7/G_Ww+7v7˼n{I|kO[#ZӪw9nXo1/NF<}lNlfɅ: 6ms;>4VkͯMe=V߽iݴ鰘 })tHI&#B9ceѥ3*V)YF+ʕLyqqom;SCxQ\V> |btX++]rd^*a*. 
Q_I;[k_,b2}ѳrZ_㏟u(G7xx«0C7=Cˑb bX@) bA~.F7kCf,*deL2f*j] /m}J 1³0Œ^LLRdf-eeL3CNkuQ/1p*ibJ%0TNx `ޕܖ\UP.Mn5Y &iFdέ#v7Y` ]}j΋C߻y[c'T)#Hq dz_Gm)5Mrg;^ݷG9_D·Ba8r^t?qVwž\$&<~,?<$a[s>/Uu@[!/O+=ԳvyK}]yZҢLq9gƄͿcjpΞ"l0w Z=u,|-]tx`3CaQENE{~OeP&=l.ju*0;n 7lv >v2h;5[3/DO&{`FU*.%"1򔅲"k6:#Rhe/eo/ChOgpU@Oq'\Wj[kuõgGE4:%$wKyI,dSl+NaNr+t bp?Ӣγˋ'}׭ƍmmۮG:7]6P/p z0gk3wf.B ar;=ᄽ'69 ̓Qm)/x:閗hH\sG#=+Yf+'\)X9},Bʝ]m7ݴn曳:Qj̽ ?>qop=r{ݼ:9䶡x~1bߔXwvs"}yy2n|8OԝK)T/R9mZoO{p )fu.͡PAe,XL Gk9Kϳ ~x'db<2cuRcHoǃ*jsk>E#YΖJsVR X"bHkwmc s[h[[`1$7d_ߡ^l+YBDzsזxΙ>pg&fgJ$J" 2J&9\m8{qY]~lbvu7e%2]틒m) 'Q缨ݞ/ wA4YxS_^\oo;k -z+j@hw]ka6p=,X_Iֻv?=03=\_^>pջnvEmwsy'^o˻ h.Gm6}w6Yџ^ge\Fn\VVնaOmثNMNmuEqvBn #ؚB8 R(Đj:T&Y^C6g?nuK{/ֿ~6_Pn W򙘥SzMJnŽQ6 opt3O's! ?yXXm]u~zPU̓d. ue1!Q2 +"I=jzqQ&M#M#Y-@URp&Y L&"dvd&6lM}p*X ~> RcUHB1$%eEgj*^c[E\>&efV9[a 4uL*ؒx?:cY1c!cycg[] fīτWep 5`  7\Wquy 7ʨK?BЏ >zP`NWa)wŪ@XS$5Cαp T  >hg-ٗH1kNK5jE!ȼC\|IH1 :*9JKؠ ZbZhRe$N kT|Z*[S ghzDX+&6<Ȕ"w~9ڶg+;_tcsF$)jckGFw8V d7,'S| C|6{*6d\ZfA{̾ j\[x&qv6N<=Ո9_嵥d2:gKGH'3 ( ˮRklbth\ 4G~ȣiGJb@-84{űcQŏWK*4x_ptuCSRjچ$TL d3vZ-y.Ϳ_a{e@䏥;ir_LJdʓ:L.ZN6LZ z_fT;+/'yަɋ+x.:Oܬ״Ґl"2ܼ8ϛ>wl4t⌷v&O"~w-N.b\%zS].$A-M(?=;e& &m!uNo>5=ݚ`7ݐ$Z}ۓrNY:oo[*旖P2ף oQi;# t( oԛ3tLox{7M}B5DĬ<\aηPʍ<Ͽ.M'b67 pn/6T1?be׺ɾ}&f{4e5/viހbU_+7<[txjQy߮D.{[HWb$0cxTĢ&-VȚhs+]h|e͂֘e?m_'%\O c;ryY~Z~;x?9xÔO***P8*, !KTb,j͵R.8&MƍGБ5y_lIbd#aОCleZy)e?&$Ɗ*VhVB:%[pvcDÝJ4[ tTt^,6ɣ`Ui >FL3}w)jX ]ep5 < v6Շ=D5 4f?M3h5i(@cO@KZQk) & MM1N:*9y4eJemtUZ[-;!>Ϸ)9\}8Rmew9˜[F\5Dm_hpT/Fgb/Egcm0:l%>譼 nyhB%M1CV+DtϤ:h8u0*nY [(jU),l4EW#8LV!4M9E}f!UYyur9Rq1a\@ dr; . 5ݤzE.'\,rm`[IlP$cbFGE|'fEp(fTg2zxJBo)i&mR^iĵ*ЊbQh& eu6 6*Yև!TM&oP% D5j.TmO2u<دUwg1mRNڤX,{ABn`$,kmT2#9 )'u#km-`D~k?^@֍:dиz<XɚW i.`٬ )Q!q!9Bcؽ둃w(6 )][o9+_3erx3/.aw ^mudI-)q=,-ɲ\U1H'vQ!ws'*JI!Y-($.G=tl9z^Y= $:5†>dD՛[&.pu^.ZA[7Ž^ɜh골,lx".U‚7bZ ,q'/''gLDOD%)֑=ǔTѠ ?[81X<;+$I&T0{vyqr%:҄d&c-p/z?ߒ{%y,uG}OukV?,-֚(noЭHA{WY`czW(/pm:RRzp%P`to*kz4jvpllW2*Gp%CqbkqM_*Kˎ>1Kي,pvJQ0B]eq_lIpGWYJ][+MB5UE3t< p`RWoZיm(./ҍ*W}&]pd0 L3 翿`>]1Њrg\5QO>nDMY#TKr9^Ɯ Z/?h u0A&) & !KW"D']Ii9ׯԋt_o齝iiNQWٵ ,`:.?q^;8IT=b'zտoirBq(jπuq8ƍn;f~?sVVbvx*+kYU i/l' ToNe;Rvv^~}q>Kz.\=NZJp()y˷(J?tC_=c~ljKi_*KUH7W9$O~7p*r*KI zpŹ28DUWJK ;\e))pJ(bW9iģP_r:JF%XbDi9ѵ]ڮEu{;ձʛv2O/GgUdi;5GYtuwQ)>_;KyU eڋ0OG}}qO?_l vGUrpo 'Ó8i(<)C-&<_@!2]S3pUE'XQ-J~u|Ʒ?O_y]G2+}q+uF?H428-? -߰*NVZh?ϳuqk^lwZWKYi'/ Wl;w;$i+jY\fղ;WRjQj}2-Q`7p1-QZAUہ+dpژ&ؙ/ZZU"e΢n+}er~mN.AJC*=Tr'qx\&Cz_=Ȋ׉t:?wgJܖ@^8A9;gب_-,l5?٥'Dy~P⩎q`BXe"UJ0%1i "%P8ح)JYN?Ip4 4{Vzdzy^^Wzy^^RzyezyWWzy^^Yk^^wdvW;K;KiJ[ߖȖKsזڭ=d]6[LxD VE#B1c+T{!Kw!O%( U`\np b"FE<38 gRXfqT9* Y@fLm6y'7)c#y.v/_ZDWvþ@LR G-G ',Ri6 \ybRy1J@mM}3ܬ&0xC[,}~[c^8Ftd1_H$UKy1tqD%*@XYT0|2꠱X1#>lvc >4G|h: >|b_B#X@b+QHH$2% 2ΰ1'ʥ\'r->&?|IGiQ#ŗf^KwƼ޲!|Z}Q(+QLo%bneфTJaW߾>~"7cʴSV 5F|i 힀;n A3L Y+Lsc>_Cw"A 1CNE &B<,:% *{1@d &dHfEyD[tL@;gĹͧ_l 98=lgz xgxx;7Mq:VF 1B!nRBryVqQYn:if]M8t1W7/spKM[ j޶YjCu?خc* F|STcBjxٸhfT,A˹ #Pck!E4 (S`Sxn67dj|mg]Rt;/m_L'fl >diy*39:lzǞKxL|տ_FaՊrU65 u}pm"n}.R4~\]`yKr-2iwwm~n~|L&ߢYnx[6y3M;Li3my-aÅ$\7ZZl4imLvnRB]#JTU=Q"VLE[S9|E\*㕄Bި.&Gmb39uJ^(-vkc"F%'h@P "c\9F)&ruPHY"!(O)TɎ ;g6~&k*Σ8pw}u0]ږ[:Hnse7Tfمz:[}mBm1c|Lvw{()s1غsjzw8^}<<ݳYw-+lY5ں^7 ݣV0Nw~;͘$.&u:n?މ1x;ޑZ5_77U&MV_~)"]" :dmlsA(n܆hWq&; ip . IT@9'-&3ٹ861{{ZZViLDu91RKuB\qII1ٻ6ٷ@1qs$7$o= MFa%$j5 Q`6H{WV S$hL>DB 2@T.#Q$"!X> 5PDpJ8ݕ8/x>q~ s ѓ[dӝ=嬾xrO#86NN&VI+ T%B͸1\u zrQ$7Tα9M@)C^{-THƱ`c)U1H*TMR]#cgQͲ3vba3,<4n=mSf[W<^|]_5x<5% @kQ*B5 @DBDBn pg#9YM.`LP m;s5d,F&]g%JM'eb jw6iA`wi9KOUI'4YWW? 
\ $TwIHeӂZ )yT5!$Ji/8"F/ IѨڎ3qڨ_e`<DL?ED1"wi<8#P,٦d"7Zp 8Zin+Qk׆%-"Ms@8 $(VSeTGKVQy;gܦ_C\\=鬳3-9yǸ .\ܥD 4u8ڈ>/%Y\ڣfEDhp}@*\|\<3鈵R^H;"d2*9cT_1n;qhW{^{cOTx \ "he?()[!3\%cWD'>ȂςY5.P5!c&KARQp;J)& B4!td&3W`3g.WyL_쑶cz['%ʹ%/[^N9@&@GN&qe>ItRB(DaC#x*PΗ@V)!ϡ\R%Ӏ4:`@X9DN8=GAQIKΊqޟI\ p _9zH$zC6wA9MpHTZ3̕^IP IUݪ9X/~bM-}G!z)ڲ\c_ȎR p~I8ځ2㽡SOT#V)}څa,qIȸ^`0~X4ީa2n#dJ4b9XkM,DJW"lho{{~oWogzih.$4a 7i9EҢ ݟ/ Olhdnhyɝkqg96pB'5]?!p4Ie/j2fyfY)cdƕqiΧ-bmb 1y#ޓ6q\We@+HZ;qݪZU_C1E3.}3 9G~H/h@A[_H v, #M;wbLm:l3k*&:URk$H-oup6b=QAn223GMcqGz&@ߞ4ԼAqXoวE٩>Z4_}bw|__Έϯ+ͯvS枅q Tf\]?xJnx3/?W[mZ=c YrRjN"j>u""h_ 0tF^|[pyWKX1bՄY" OT$AqMخZDp3K8_A1<`^I3En As)'u,u;M1# &%!zt$Ld2f {)yN* ɍՌeGޭ :ZYK0XDD M,GX5 3ꄷ廠y)h #˽ݧM1atva?l'KTq_Oˆ~Jy,vE:I6yAx,$J h٬lZa2_6`&J& S:=$*8ʝMdBSϘ8D,tQ`@3WL#,8ZIZ<^i/GSα;7Vǐ-NېQ cdSx.  >$i- xlB}wO͠aYyKq>o=e(-|f/)>w{I.IA[u宅?zO[mGةUXN<9p|rRU_n:eDjtJc'% P"uc}IUdvUrX"-!΢ 8S Ͳ `%R8A1r~856n% . UM_ҚVhٯ@]O?Vڄv\\ k,)LlDkHr0+'ଗp4 3*FZ9ya5Z`(3 2B9XETC]E wZ=+pZ4#-ڽ~miL]q,ȲRf/Uі!ڑZ$`Ns1(XV!Pv.,ZY,&W8Ta#/` QCb?zb9 ^:]@5JXkF 0DHX02R~0!Vi(µq:ƅw#3pZ~snG""!ma' Mduր͟XgZc"tG hY\Й٪Q`aX* )BL {`G2ËX$OdER s 5Z _O@Xs]n@DJN Ǥ"9YHAJ3b'L,aG!L<"=v?&½PϺvOYېVA,A:BKrL *Î$?#)f#Yj(%kŐ: T4$ 0iQiG+ 7)FsǍF$ Z$FF$ S  VG|ңLJO$OQ[,`tJcgܕO%|D7`=ne6 "\^ g6)EԆߎòQ(~}[ ]~nWJ 0ܧnS;ը4ƨٰҘf~w|cԘ͸Ж$a4L~TNXl ]b9{,& |>$AtkFEfÛԕ4a\WQ۬qCY=Xis3wX~tx4%oMC&kD2iԍz}Mv~Z!v1(bW$j$f+0 0EtN/Swj&pe)yuo(pb !zpr^Pt*RDLQ`~62#E$* 糿$K$z*~Wmjb.A[e XI˙)_(t<^^00qԌ#rsqlHeTZl K]3^oS=5BVyӊuV*xny-I[p:MN*n( p=?)R#''_uK?v+;oC~a+p5F_p}vzqgvPZ0}1)O-)@_}d8spJe(@tX`Qς{vhNl)W_E/˙{ "-˪F$زrެQJ?}`ݬNwd=@oU@וHy~g4jn93qV ?9ftn&%-F]N۳$J`3yV(F["GdУ;{{=$G!oS jƆ,?bh( @F@Dhre1scV- l ha[/hIgനX寠<Å|Vj?2%qYgr_/'Z')Μ eJѼ~"d\~"̔lbԙA)("$bІem&ƨBky^~2oL|2Ňʨ4–bY ŏyM>]>03x*̳Ԇ) nE\''*sc̱|:WR}.bnYElӆ#Y{ɱP 9Zr;9[҂ZC@e??p0OT8)1a JslnIvl +M}vimc8P)ayΠFbͻ/!ge62a8*}Qc/*VB(kB.0 d%5(XIw{mJ,{(Sﳿ.U?\;\xm#9ӿT5ȳaܔ0Fy '~M?N"ABUAsɮةš1ϾVs{,ɋM`Ư7_#TBيd[`="ewVWOӚi^.0z=N(CN,>i6>ٰJ2DK*^:ق[Ut x1[{~ejߏ#x/ꖨקHXu1Z]쇇4ʬ?^ٳ6&rЗfcJUVME ;7ɮ(H{;^Ϯex s_>^$LN}f]pۻpZ bw\e`-%E-\e5Qy6LߍyN~XK:tWV^ Z,KͩN˪8*mʲ314FI<4N8@;8Q7u^.Ω6uUm:L!Z@v%h|i[f**h&սEM=n>!GXGDŽj w}=wKhs S5^Kq̎4K:ZUc$+<`86qξ$6Q<)6qP"2"6@?"6@c6E=CbW0ȕaW\]EhwvՏ;îu+vEB 2*BYGv|UCbWP`+Z]E(]=GvWX5;c0+ġ{SIݑ]=Cv5{PQ3*]EpZM]؟IճdW~^.ceϪy62n&Ob]Ώn. sy/LgߣH1isż6\zo1Յp([4>#m蜫Ù,n!aK4o 7?~"%)&Vh3VI'<4FY.[3.[!ɫ|}{QĤC7EF$J~U%wMg ClPߴ//~>d_ ڒQuͫ祆u)m*fH3X\SOs8AX* Cs)]:Am&j MoKsYsj_shأ $*(gpQ V tARjX*7 T+6^Ԗil,˯O̼mw_fM4Iܽ1}ŢDY$E]}BYU_&=b"^D.'6t٭e +|VJ52jId ;Ș!&x3y̩(<,{q- isH@vW46"Xam.G,`bg'07 _SʣUM15vNkLȐ3w%T V0F;N˄PhBH Hd;!yXcgJO8[A䷰V۝ACK]-I3[<8[} O?h M6.6J].s+ȍ6L5oBI$I.z/鯾;qYq@uIπz!1Ro8''fb69`NU'kl= }}ovӌrmx)Po6N7Ow7Zwnz87bGڐY|zvt=[Y4hq1Ea&AOh[M>mBSqNP{e|Ϋ$=浉B|jdF#Z%\2 "&66%i3^(12Ʀ fXUo޾_nw7bos=kY5X#~;7oej=ԗ&ھ$e$N}GCoJW&v(tcff'O\soos2jϳiՅNr4Nɇ̓;R|Nɇ]fe2t29_ZA939zd)p$hF\:jϘ4X2á\C9BBq8rH#Fiє- ť0 9Lc+Z,`ElJ^bTO1eSNJmXF?ʨAZ>qУ؉Q *JFc.1_C rfZ23 :dGB̄J@ !THT{8KȒg tiA%#r h5qLA|3E?g/hXGǤf;%˥F]>H,Y@֏S& %tP9&x2FLNi+F޳.j`7#W>ۋgsXzKN;k>YZC0e]vٝ,Z|Ww@) Yg̤XPjh@7l- %QkG[A*bz;iN~kK8I5xmb:և'yx,oq#LγNt&_g '\ݺ<}}%S Gi4hIyGʑ}=zmz-w}~ngq2l Xdl mY}#&w3?ӅgsSDq]@j^ԞmQXέtToD2nntbOӟ5IFf6 ok a!荋F@h;D}3•.9Ge6h79RԒygMYPA "GC!~’U !\} L2'sNzx+ sBd92Ğ~&n(>n[q4>~A[|^,)-oYdۅz:[sS}ݵ)b?b[|BvO]fh?'sitC/,Lz?f{x|v?vBȦçxaɻf[Ww|Ct~:ɇ }^}ut4%4ǃOzG {\z补R{CJbc^ZŇs8gwtPMm6/%$Om2 V2f|gW0YBM 97]j\N'&-O:ߟ.չR~,Mv""8̀uH01$2jDil0k=}Nu?]R='˟15dH1p>iG{o4(2#:WRi猉t*z!M<fJ]D'&, " JF*i-pT 0kW%h!Z~mƯ'Nٶ3齬s֛Y{sNk3ps=ҕ&[㡴d4B:'&]>jQ"!֦\UdԶ:U6mʤHs>HFjͲW4H, Vc Xx >:Yfxzynj@zx|bOL{ YAd$(b(FR3׆ VK+3c(ƞO66d2pd-ݐYL,LbckW%%~I01iDZ+6P{`4K@)2ctII1L&g:Z:sPhdUdT2Vg?F:1 b58EeD"K!8E gc^k@2'鐘Yp$Y:X+qNXid]ֆR: )D Qs#H#EPs@)+WȥsuVӒcqQVE9​4iEdQx 1:s@e&3Sg }! 
*px.xXM;Cp*wco*qWFя* t? `{/pЯUWRnw_fbmV̂ `!4.F٘Ē#!F41ڛg.ھr͓^cxͩҹ38ALPe'L ZUj`ck:0˹ Ljv5cM&^밻zU]m͇=IMr2p=2UYJ3dΪ|TN\,!&= Ir0  Ci)"3Y(c3|vqTOCf/u*SOB lGE݅J?-ǫvL/tgkF/~/Q-VWGA0𨒑h `6^flBf:%QyB+di:3/=3 Ϸ)<@WqWP!imt hizW\򬙄̍VtzeI 7,e;z>,ZCG P>Ej[vaz Y/񲀻A_OA"8r1:"(231Ĵ+'Dْ(>a**ds'4: N^ XRoK {@i1Hd#{$-?8X oZi9M]ԕ$ڔF%'+~$J]^/nÏqz/\N76ˋˋ8O: sI(ay>Y9'Ͽ_Wq_0nr!צO "fbtoc$Nڸ!Ru B/4/oُ~{JL!+v鵾~j7^]]nS0xl  v -f: =]77Dϔt~}!5 g8~b:u'Kh 9re/R9etI2G$/(ۗdՋkvq蓹'xAlhqQrKl%tG$P")rӴm?)HEwth`k/2 hN ⶵRJ~m'*Gr7_n 0wGmLi]wwp=9ϖo[lg~Ȍ!q}ѿ>o>8.6jVc;:vRmyJ8tS^:|>|Fc>ʫ?eVzg: ~o?'1jddfn;vt7}K~zwk3n8^Ck_=j{>GkJ1LݿSsKxZi;̂y7M=3'wVSG޶C'+*wy@9 ~2[݌=+5>zgN]/Q<8֏uwOnm2 s?r+LU˜1Y GC<#p&Z-Rʃ9:Y)BdwոVEL<'KVGt?g?J1GjvJ%gҀevZPV,u^~<ȱC3zh1!q:Q@ 0^ZWlxgחoH@Myc"G ! bC$@yA[]Ӝk䟗8fQaM9h (zpT̖ zQh yc~dKhqA4 ~q׋yKIy5٭/C0 ty½l80ݱKdal2i j^1 AȌR҆^u}J5m-RǾTQQ㏋UZ<Np\ܷ@r]XR _Tts) Czynߑʖ2{GѨyU1=uxƯUSR/7k0*Үr/?osKs +kX#!c478lMM`)5h\f2LNH^ es&, WA3(ͣjH,r\zx$;2)!\ p0+VMLU95hds+Nǵ{ǭ/,P|Mʹz G] }=Q6Sx9W|W^3JzEAlgrAƪB3a )MȿH˳N~7u [ZX[w!Y4oC&TE v7wݭs& ⨧m4VjW'/D_hWtUJ'[02WLzdkqaRt,UΕBR#%KjVXIYbL Jq d"Vr.{X6a* IzRjQ;&-riفs旻ԆZ%F\{G9ȵgKo5_{ x[H.nx]1=^m v3hq܍/3}̍/7FXCoܺ1RJݡE]7F.7F3 8wUĕTUְ"5zsBlO]"-]wWEJzwݕ#A"'H\ TUVv] zMzw~ܕ\o}A)6w]$>E1K]$[W7VICUS)_$j1]}W^/x}Jk}JVg\[j6T]mr"030V8hjrvCmhaL2gpMj+ 6R0FSnȲKȌ0<|8a\bFq>ZDzYOVyLoܖ8Ջ`n<_:þsn\LM*g x(DVYrR{D"G|}SR= tRȬ"2Z)aY8Sy0C3μsqT,{5*#"cfPnoL$Hs# @:[nJ%J]pg> 1T(e :ˬT6f 6L+u[yH }΍&w01/0քɕǯ9 m{s$`gcPη~Z}GhaVRPhYYm! 40CGRqBjQi "BBK2H)KE8RHZ3~~R+ە'ޏ?wjq+b925wZ|@fokHa4*sH)6 (]&:赉\"&)Ko+;GoI.w4uag!o];磚 ݂o8ajz|FYgR2,-nF*4x| $XyȦ"7b,rw2]:u?a&ģyghx葥Xr!uԞ1`ܭdCWKh%EIU5 cB2h֫ࣇG47K5B?l>/VJKr2jz7lĒ"v1 +)XTVEkWIFR)5+Xx-_{-x݇b?Z&pU?;^ ՟פivS|sG+\ovqqip+sT?0 gKx.o:Fÿ݊ܶ){S϶ʚ[d;OG_O7eD_,|1GA+qs:`8MWR Ri7X5Ơ8qh/$%^zPՂ0 z\L$Z>罥Behu!$Iо4"RK/urvSaWԎ2]yB嚹,{x͡#_;8@'򨘋%GCBORHxMy✒Bh5Hl`IAuj|P]\0ZEjg8P q.۳9;yN d:t[oJ_@| ~s|朥9j<-l놾?o!G@ pBL,Xc} `J2*gdm EA-  q( *@yR*)KlH > oGd?!zw'2%P&vȃ_Vc^MtFy#^&" 2jZx0/ Im&NtV@ xtvO ='Ґ'pTK(4_)Nhs6e^1B@(juZ?ȓu/J>A8fxRd"\,((r"QmxgaуrHT^ k9I}RtCX2ч␕mD=J)۲ NdOR p Va@ R) *Z QiK\@2h,2.%*熞`0WhZ_tΡivc1A]26 txNGmg (*B -/{Eoe aά%ܪ@u\-U7f52\֓Ll<=A)! 
gHɡ9$ i@2\eځG_h҄spW\DD Gj:\{=p+MCJ77o# "37dMmqV,p׻~WwC5f-mS{45UelO] p~ZK\_>j^3}coF:0wÁj ܮ|uPXi`2|h`-<0Im<T}|td8&&yk6ikgynwp>Ѽӯ eňD8yeʴonsAWўNGcJ`DzFen y`Io9|4EA2`1+8ɫ#_^4POGb=sܕv1J)WJRa(50mLv2zE I0IB~@/s`HeɱdX/nzc Ng(S3eA[9T ڮVa{ȃFCBRJ)CmsSa1zϔJѣ׋Rq/E&G_Qg.CS.S.s#R,g@mиWlh\(GuhR<&< yMwC >x=pv>r:#fF9mRhO&0Xt&!]=X62 SJ"$I?eb2W d^9Y6k, C/jku zGz z%!.yY 1+"LD@KZyz;:He,JoKF+37oK{[ٵy~48p"gVr7~[w=3ɖ,5tw_,nXC*w֟TQgv % YzUP>Ѻ hXQ:^rQJ4hy !IZ3}tU<.ߋZRO)J( șuM7/rK"Pc5Z)7V9հ??W37sC}mWZZ̮J\5 rxJւkU2E8udJlrpM"<i,KQ\8 84)aPNل]j9U\Y5m l7)XO] b:m`=h9/!w-+եHa4irmKWWoz[eً\<Vw2xw|)gTjTC5ȒIբgf|t+Q֫W+[굛 )~jFyGaYp᦬O] |q>jd~Y.|O?Z.Ž=Xz+,JIgawW\y3%UٻFn%W<%Y ݓ{Ş'ɱ4g}-[%K^7 >^ .{mp-05 ԷZ+avlkf\]DkXCyh09 s%ҎѮ:N8az  Džh#(PFE,:iRɔ.IIN"jp6$LH F:l2RԪ 3dTVrvd5qVp׾9C i?^.?~KgX?;O4_Wchm=b5`:u݈W*v^v1'mbdfϿ ipfbGZ6ug}@[J 6˫i~V R|ހ؝;hE!)~-ѧw(As]PɤOm,~zgzϫ>QfS֛\N?QK7'ޠ`Zb Y2YN31qIjd90yȊ*>cd_^>4ʇ8Vh% qFٓ(r<Έ"-sF)9#^ g9!0/WO=vĕ/Lq4i}zv`7z@T\\ \S"-ǡUa8/NH`dઈOƺ*J=t*Rj9WWBr) Uړ+WIy*pU5wIBfy`]u$iz}>|{Yb׾f]'X|~vo]o>]1Wo]hYP ~ ~ˏ?߸ˋV8P_^wf͚&џmʇ\V5Y27kyDc lV1zHѻl}byFjr7aXNԏV\ETEu2B֠7 Y!5KD4Im]򄱢T_/]^Aj~L/(Ur.ZUۋ[75=Nއ U:2 u K6Yc}"$Yw A3FD8 "]x^e61GSAI "x`Bl#%7ىpXL3K%nFɪ)Ӽ=HL1 20Tg&>vj!}}Z%#m)/^Z<~2۷KDW Ӗ>ѧELW<H"K *#!6YKN1M I:BbyD0JȠQ<Ffc7}{yR!N#9;wNX &I#W[3QĀ!,;oMmZw&ttͿpɾ{/|>OɒڕD ԞFL~rs(ۆgg[ 4 nnj-ӆe%R:T/cRI4c wTT0 25\'נCxC|TiX:.z{ibB{;!'ے9sHc ƺRT`QKNKFU<Sr))a$ؔG>< r‘rxV5q:́Luk!n wu݄\.Bj-wqi]I'g1%evǭUx[/=ttg*=!;(5B~37$\l3V +uӵ_֧ Qf]coH[yp5?rMAt~>˻ovm6ʑsz}8jjjbm 7_{>,1 {noBȾE-gb=oϹ`ccs LH>}.Csم~,G?0x9m1.D1"Z%]̂2%^Ah.{m} d)T@NH>}בht 38cEei i)听-B*kUT28.t YA 6$ sP^xзU܅ (/ךPyVgDMgWBfi44jzriշ m=}BSn& +lh9KRъͅB;`rA񈌺Ck0&zi"2:ǘ/pL0Rl=.E3:%D j#c5qv#c=[Ҍph ,&p$f`rL!aDo(u=txWUMR5Kh dJKtµ[W!ŔTÎ'S[̢̙JT9քtHuɖlH$Wb>xOU>U7ZqXbKJ|BB֦`.xKZ˚ȇNM9'ЭشEʦGЭ1ВBO~~жO+?(=-cxsyeX}cL+VEI╢Z,P5I]PZE,8#jȜn&F'\34:e!j$YJ26$6<,ztܮq&PQo@vz{3d}R!Q%) )4P|b(:J+q:b ЋQuzi))꠵;PH@**V_Co<XZyIV@k%v Hk7q.of~/$M8 l[6}1Q7W >ޗ26 +b?օ-1M!=g1d >9uEoM6870Lx a1*ic>ul|0={F8=v %B٥ENga2)ql2e }l?@Q2$!J Yue3\!y~Y/e4 M4LYYaTZW"{qC)',OhExѿuM3i,rA:ɹjkb'v^}&C":Id9Ԍ#W #r1y)3)8_or^~`yYhsoGbZk><{bB~Kyo臊ۣg5oWfW Fe.J W+ip%bYWJ!*tHNDqVRif[- P5ZM$EMf Mth -6e[Ic I~dbGg~$rn}(bkk(Ns(ӡ'8N<\ف#zWir $Wd6g\#iPc5Z@+އJ:#bTz}AZ] NEsd) yዐAVE%2 uNM8˴Otz~{NAM:Z[h36,u$M Y|1Mޠo|rxeTd t~UhP eM([kv^AYqslWȒDgUb.ʨj` d#2B6ѯpP2Vefgk2Vh UrV5 I!u08;oo.sQ/uYRĪc7_F3bEw%7T6S׫Hz~$s\2>A._h/%  -i:_fշ&$dp Q{ϼ٠͑䔑7y4O=!Lxt+&KDc%6AVMT%f\[1}B|+WxͩkTUUY{[S("'(ʢ VLЊk!ަJkdQZ+HP$%;RU1ZqBhqoF̢_[+&+{p_Ucd֪&Z@gH3\RFj2Ҍr\_FdHh-/f;Kw"iè4|dCj}r dGOq^dЖQ_^n{d彞 {a9\ښ@2Ia*vt>–h/BIiURjoD5xFT5ZCnҤ^bׯkrKmXC$$V@+.kV%j#}ಘ4.%{NI7F1Һm LȹB}}ȧexҖNݓv-δ6̯4w{:w嵖yUMdVe-xuQBZ=*|t*/Zʣ[fMEUZ334z$HH; LOoaf>'ڪɇl:#x8焑gШL(YC{(8kKt|3J_[ZНx<YLXLq"1EȎA%RJB}V^gJ{fJPQR-d((RCEMAZd V!+7IY:4ԇ%l MZbĵjNYvFJoXpP*9T k{^1cc%FcDJbK\., hEB"^ 1E,c-c%s1N!@UuΠxwF+o4Y?c.,Bm|pŅ&"2Rq,}q}q [A^% Y3+%<5Mբ psd8O8!!Lص  5ӯ^z3e-e|׭cI;RN[x3ilpd>"֚c "=<.N͞W;*~v^ ~KΘ2;!i͠Xr9ՒP:e94cy|χ,Nڥvd)<&{5s1[G҂6/s$Q PJ&3) ׋dZ-)R>%S2sƱ6ap\rpN\ F3jhv9` \}! 
~۟ no󦙞&Geqi$ 25^fI0z~wK x~a'hzBӌliQii!<} `F9RqJL>5Wg8+ތwCՖҩQר'q(ֿ],5~:?p1P<ͻxLN]h Qx5?_m o`BIb!\ޫ"oPC'НIhK?篟Ҫ~|3zЫ3aѴ+`LWtZT8t5Rm]JtK'0+$tV]+Fkt(z:R+eWJRNW)ҕuԝ+\WzQ Ruk {+k;"R˶ ҕ :DW 3thu&n(&d MA ;; WcWb+Fzc NtWѮ.vh} #$tutR9魲 h8`Ͱ,gBo5sYi5e;埾]S=0=¸1nnS M "6oʨ@E[3Y p9L]h{XҧfQv<ϑ^K!wa[ "7|\yS k^KZ~^H,QKJ콱ˠWhccC jN /^8]`xCYr Wu&Ɠњև3JzKXrC'?o' vBkb~z Jtf ;ȳ&d*]OkO63d{{4]s]-ltbmiޛ z .Rkw{7=[Xڧ3̝Km`ͯM4}lSta̹c=%D0ŕiM a@"ZlD(cB]6fct̒a2 pӴ~v[f- Suq-6{fOO-vKHZ3֫1`5a BIðT1kQA1`1c< 9C9{#Kgzπ(fbrO l{BA{[M3$da<9jra\q։@H1&*i6P;14w&)味8 A3 |` h$aD~Y.Jz&\u-x\l&5WTyKr-!Z[')5JeϛfuRt2f @+iv̶1&Z->4XYkac&Fpl-j2<<8 8:C/k}fZ .qX՜lŁŭs{?p Fs>؇ 'Φ"X'_/6ߚdz*6!93M&v>G;s+ƬmJ_k ]05v:(ϥ(‚Kvo3fu$ˊBѤi~ lkT[  =Eg hN34vяeő#6>ZJ Tj5 %~YZѽ{ JW @Pŵ R;1N-`͡нPXeK fCp-AI lᠨV?-v V2wH᱂,>C@f;d`vbЛ!ՠh(c¾oC5z  FeBE@6]o%$"XYH\uavLq)ga0X CJzL.KLqlKE(u;Dg`vv\e[s_^CE[/ acg;.0LB`-$L>|AuPLE&Q`uˮdɞ&HVbxZ*2I <9E 6$/Z,,tE]H) >@ &=:kXeS@He0G \5᫮@f[P6H܉m .d!T?3e7u (7[#EbP; 1d|?.OaJH6giX} ][{ #C+~uiSyρ yc.`/J\PGY A.ÔA,9jq* oюe>"[ހBz]K"2U5c(h3VOMR'tfX!ٔ5č ơtV% ^Țfd!ZP {(Fw=`<\^U\4>VuVL3vR@=`쐮:!HYPY+" 6RꝦ r뭛ޫ \ iɄx}Aώ泷%Ct(.MG7mO`E_qh"R0i >y8Ha}瀖&oN UG™J;Ή-j %f0rK\&Xנ 8EژZG|Wx hNp>EVn5dZA %߲uCd 9IněPQkOk_K#E_w*>]f'[En:-pWQ@N^k (z7ea2kA3_,N M F;`Q8lW;(8qK9Rw$9Эxzp4 /MABj͖^j:]Y bs ݱPMp$2ubH`_PGQ1*Y`j?.^ΫwgqQ㸦f!eyO#AwGG?{eO{cڳw<{ "go/$UkAv[wwyo?&D^Aoyq[n^.=7kʱ«׷Xs#t{q1= =y!\ߌ?9p_;)?S&F&٦3gg* !37$6' L$)&ؚh4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@:$C8-%8$ m& >}%ILEP$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Ml훜PHgLp{ Z= $(&N2 1Y@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 IMym) _v@Kq3I MG@@.k@pd5 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@:$/C{ݥ5>pm~sw~׮^_~Fn-8dOl&}p (C)mS+n;t%p;] JW'HWo\6Yv?n~< A] ъ&/ݷY~X{j5TO'{jgoRW70ڏ{P,?uvJג]d?rTn'fッ{*\KQz ]e h@5o_]ϩ؜o\KPSVp{9ڷ] %51vUGx3F~QYFxFP>hGN}xn":]=p2dW' _ ]= mZw2?? OtS/wp!̛+yأ[+Aͱӕ^ʥl] l6CWfJВ?v)] ],:]17m-c+/a+!-+6+K);] Gtu:t2K+fJƴՕ<;JW_"`Jl+Azg* w`vԕQW ʤwO#ܱb'\o| ù{e_7 qS3AhwNSq_3OZbv}:gn 9 lұAiuWGN}|n%O+Փ}ih}zJdOaOtSo g7DW8n>JЦtt%(sR:Ar&6DWm[+A^] Nó] /y\[+%뎝S:E"ޕ1VJƣ+A U1Ж W4:_+4Q*Zm[+Bpؽf~ vmK 3vg-{&"JkkN [CIUXⳁ$&W*IIUW_!\,9ړ| Bٸړ*I E\\L⊳]%i>u)%ꬫ8x瘤gSNΕ%{[I)O㢓\ti=[|Gyt.*.%Sgy^+5>ӅQYxwVٜ.v]9vU$`LZ}[X3qYƯk0j'gʴc)ync8UTПȗ4C ߔ̰ \?~ϛk ƫȆ4`|vk+F~Ml(9x|O[^GƧy9]˦p+}<[r@o/q" {]4y\f;bzD9:ip{9r7-@9ttE:f i.^",D2""&ZH0<)c"¼v.Fj$|%R{>嚮kEa/oYUAAbyg@1Kv4^.7uLzLA֥yIV;SE(V pPwR_uqѿ ?-N c+(F[3,r;Qz%\0ͪqM?Tе{H =ZYK0X`" !Xk*1j 3ꄷHr|Tdᗯi&X LWUVf3ue[4|ٜ~dBe6HeB *86s ; 7.f&_s,Xb.T{a)\fY\e^ drS\]^b"/WZ2z -t"e4˹݆q?/{EKYO񄇠d5%(aY$+:uzDj[r~+}!@I ']GAC!f}8 lp6HFZ{1'?E *R=C33*kCTW0峱?ag/xK@!ҥߧ/liٟamllm5|? 
?MK\c=г 4hwv 4Ѐ>+TyXji0bImJ;3TJ}N$8imv=_a|οGMX՟KLg~/luanO; 'Ю'y0?4܍n7O%Oy9Yu·$\IXgb0%z=ͤjJ{; QbއI.1m:^ w1Qҵ'rHC)u^ޜ=Tؒ%V[[EFd>ZZoS\-&x#32x5sf"Tؚ8ۑ=[=,lK3B: +9i/׋T]n2(+J`t7/cyE {DtB*oبK8X(EXH0U<1U42D'ZK(IIlKґ0X5q# <nM;Em2j; v%-ӞSCT(pCGA_"ur N iixY1 !f R 0>r i[g;~t O[ӏCC7i<Q5$h ĨhT1N [ CJ[۴Zadg($^2XobA% 9`҄Ij$f.2"&vDhj X"Nٚe\.vI@u(r0.h'"Fۃ"jqlVL9*tx \<!0Fc: ]AxSpY1D4V CUvKp3"ا"k/%X59)wK9 Q Ƒ5qvqz!zg.2&-wޥVbؾ@Div 6gx/x82DÎ{IdVDT(TQrF 5@|<h<zC/ y\Z \8L)_J+#p봚 +<#/CV' ޿{QrA3#z# N#];G(XjQJro1e^^_8ֻNYWM>цᐌmBev v%{ ON8 Q8@vNc RD3Lzd)0ZPѰ [RktI3oB`ݽ~[\m˿(r^.ڴa LسG_zy PZ:xx}Wf!Eٴfxr_ᷯpKYfTN<>(Q7&Z= #E2ఘnn J3j )I'@)H.6DpcFPoJ 6 // ]7>b|]46EG;X06}wSPϧ}n*)YV.nx5u˻v3ok4XϥG^5[}PA;>ryfyTTjRٞh,u]OB2zeꚛߴijnjl{Zc`!@"VSjqYIVr'yWZq' DhZ wu{T&TB.d)V E$xPUk~86X`IuSUߌ}ŒUSISuʃ{i^<&}uz~»>T# foǢیs^+UiIGTJgm0L=1eg\zj^I<^hrxƹʅ xJc'% P1#ѧsA% F; cO6eJKpbj Tܲ&y:3gQvQu5zjh:h2۾۱'\0@2i bZ$-ʉ@48'(M :JhlwịUh4BkM(x3hQ9bZk) ':UtCZg{^kg}x<MbCxeRWt31K ۂYAoSeL20*vjxV~Q=x\?O#TE.||$>?ӿl%𧻓>MZqʝ&\7L_ /~V+)?K쪢lϞ׼ .v FIW\sk y<­ a r]e?Tʧ_n۬t^0kݬ3xXT{/ȾeVAt\."^Zm4XyEژݏbpyߞ}<9ϭZp{iIc3V! 1x|N!N)t|Gu է؟S Y=m8L m#YO #UuW $,EcџZ]˔Vw(EےF8~-jH>}rsH JOdc ޖ&,s`68ũz傍ImwRu#1E:zN$Hąv+6e؉:f'8՚M 6,uȓ_ʗ6wo+[4{i['?z*bQ6+E> H }dt1d)/߰uyK}z9)3&r^eG!LjڠQLbi3v: =u>. J ˍfOO:yKFLm9ε28z:C/kPUL&ߋ/>dZg"EV$St֑H4U'bG_ yҔ4H)x,2(ݶPd+Z>:hc*XVC^Fzx2Kp{2lt@8R҇ ~^(|6E9tbwQMkPm|է닛Pjѕo~eB\+N=9wo]YeZ@MහG#> ED:B/^%%:5t!:Z\2᨜sB $QG Pm4.q9YC$,2e2^wEha7 ˥byrzӦɞ|Պֿ㪕7RxƉ{H $ovqӛ9 ~TZv>r۫p)&۲oEftŢ[byu=eR!v9ɧNN0NHe,Cf$%X" dm{'g٘Т"Փ#4Z9%Y.SL'_fMmߤ,R-o؊wR(9"dy*Ť^'^]AQ%<+b&}&=O@V'ٙ<ɿ^_DӉ -;}_^<_o2 ˖_ L=0ӹ<⳴Xu,vZ?3>bs:Zr\ZǎێC]}jNMnTC~x_chmX0.AX),-s$Ffщ ǩa? oo n,Yl''p[KkuEgO˱I+كJ9dbd.8P(uAIWήaꮎ_R8^D Gou4zKLEC qR#BNE'k kPMEe4il+FM329&V0Z6 JC ηch+^ Q:?yEz_nZP/vKgwvE07uSLzu?zQ[z׫ym뼥e^_/=HՌEؑ X]dmKmꝤzw=~וQdWЭG3DUTjw3'XrZm@+XGHJkP2TcJ aD ѩVe~T1xݔWN&_,?w3iҙ;h+f}'.IbIbjdHiǎ&heXNGr4ZuZrTmvlhzvzuo\􍽐[i޵+.흮sN2][?]03]\Gc9 KCt^@[w$,]Uζ@[XgݥdAuAAԈe֬ UAu7IIIme }g*5LD6߭8iM H)#_IgO֨M(YaN3}yq~Z1Z-kdEHx2sLANdC)Y'A%BqH=6-RL3e83%#U %;D&DѰЁJ bl Bŏ(ԇ[sTd:Rk UʵةĒ BeUr& x(=ұVd[tј@(V)2 嶓.(fY/RLі:ͱͱJS]^Q[6̏݋=^z~̷rK_茏s8uƜ^>Z` Q8xb񡋖8\iTSKZlŞy,wn,cY ^%JT̙C%p: L:kTzw> !6>N.\; <qtm:*6-kѲuqўԑF@]^7YtAdqsE|n~֙@m] Q^;E"$^ 6{$ꭣɋ1Jt)eo9A9 I|'kDlz^=QzB,FnlX y޿.E*}qO*Ep>> 1դϽA9~ϿDP[:EBgd ѪɓB 9V`IؐDu1ɨj*QpBrYcT%+LJ+ '^] sA=֠i{v-dgZCq Zk"-6$Ib⮔Pvہ\%Ed2\MɻRoo$etKe%+o'fO )v0=EZh\rc)khz^C4`ֿٞbDt%FhtZ;jh=%Te|tE<шJ3hU~,tВ:]5VMtute I]5n<}m] ZD5tj(UY}UFjh_;4rDWHWΰA(?jy;_2,}/\'0C?<,%Y.t9u;|`?}q|/[VPneJPk\lR~Z>Urި9fU󹈭nL&;7z,맠5C_?JfOC8wz`{Ip3]=-'DXT?DWztЏ`6+kЍZUCiDWHW#Iw!Wh,tZ3tj(ݤ'cRW ]5z4tZ:]5Cj'tEN3^^JW\5gjtP2LtuteXQ+q<W۱UCkj( "]YYP^nL`;uК; ] ]yg̈J{FCW .<ӕ$‰{ D離MVM n+.y]{yl,|Nt~-}:&KuW /6Ҧgp?֙ݜcWUQi;C]^SXQ0y֋k ,̺yΛHTN{{"uz1nk57T;J]4ŚTtſ|q+p*ןUzɊɾ.hqNk; l;vw9)"|3e02|#`7G h>eTSqCG"_4~Q'u.ZrK'OtܡGDo܈J3h0jh:]!*ZvBW X] `ǣ\cCJ+moҕn}v4tz5~骡tn+R] `E?jh_;tPndZLtu8te+#CWW+ ]5vtP~fЕ0x5 7J7Cvjxv\vc+ApꪡZ≮vBW"ѕFF 6[4S0h8Ӥжhw9cOoo#_v;o,&;d_cm$ˑeO }mrr]rG].CEVѬ,q;MV%kp^4,ddx Df^Ժٯ(k^w R5=S۽(&8=8Mn.qT:䎳qu{삫4=qw0Mn\\֣J:;D\= @`m0{nZ+QiXH =ap%r0JԆ'2.qNs WlW" sJ'P_pu*c•+(ZRfk[pu8r[;JzWP[!~=D̞ip +}o W6>˵Uf^i}Sc7מRnoPb׮0-r0S vOO-IAb:x>W?.WTk[/7 Ԥz3OjZkK􎳟Դe漟Դ嵅uK{? afU"83z0k >U'h skdbwEo?|zۜ+Ba@g5__nhYcT +ȍ\ZOqkkX\+޲TEX$8}?Zf\g\MS4s{z=O/zlkEqw6MbB`W&Qp%aW^pu2E]` a\i*\&k{]#_Cz!A) %+?7WקBkT㝔U}+}߿8[đ̉|8Lj|߮Dnlu796/)o9sj wXMZT^_|svsa =*_+7G r'?q|x hk닆!g ?{iEU(TAW߷GW|[yӣ^I>ov}y~#A/(8zϥ`A+./ԩ kd;P_GF[vߵ߿|7o>mu]v ێGOя  dDpǬA.arKQgoDܮ/fI̚%•%lWKF_JhQxUQÕv< DףJԆ0w\\ <)7}g' M4\7 Dm-:\}tqmqe˚V71G|w>L7݁GώٳVbNjvE>nQY?[MǠUCG*&.s:]F蝉αſ/WE?Wu?oΕ^r~vgRk?}w}xO5̃.}{ؽ]?.jmi⻖Z}7U? 
eon3#_$U~b>Q-<@x\OgU)И>?!/QuF{[V&V]CAzNj{UyyO d y֝ؒCI#w\05Y:ˑQQ6݅DP2Br[}~x{s'I=6{}%~{zt#oqr;{u}r鞨(W*d^.d-%e ͜({5ףjQYnU5fOBJc}ʹRVUu^ E&fJewx PѤ+C)玦az6EBȥѫI{']E Kx("F+$|0bHh_?.֠8ü JƜ 9s,A'9F $W-wvZ8U71 #WHR[M1쒇%]DDNE[RH1k u5zIh^Tڤ6"GKaH PkU\?Ќ3lm6m'r05Ţ?#qRL`U} TPUW)6QO\ )3HT7U.,)CRkR3uiLZ%2]8_`$6XvA]e1(wdi*YrSuFK 6"h c ѣ1]O9e a/v @nJ.9t4'0Blk`)`0IY,Ck(a3m>%d MIJw yt鵈 +-fMbYL[v& -{t-" A9R\Yv|Vrb1hN!3S*J.fWbe@¨U8 wGz"kUg"a6,L]%1#, CU:Qb5\,ܚϽnKrt5lih$ƛEjr[U Y7}XXv6XªL$hl3Ⱦ{L0{Ǒ@?f# g@ X_K2$W&wvHKzltWWWԯW A0/t.YB`~a-4Ιj0bT`'a\ill$h (K`k ]܀-d0=06:vS@ہaQ5$Z\4)@p8,YMAo8c\ pGC{0߼W77Ap vI <7 Rc~yp -ܤh+{ʥP4Lw`PT"'p'xʍ׃ Ua$ V! .߀XD W}n8p)NÒkZaX8 ;"â1!Ei0I5W=kցڧ`-syڰ(RV0'#sJXZ#eH 1Vxp|r\|gR,J3SD2AUSEc ~s-c|a7/;uǂk/=" 6x(sb1 i֯FRwکa5KpoyV%(E'1VWoٞ?X8CizvQ.0zˋczzlfưyXBKl1 ' Cj|g li9^6G)f~Q[!?g[qQ03:YVmu@u0W ۡCxEQ\I-ހ+ q: <f\imn "+"+"+"+"+"+"+"+"+"+"+"+\,[<&0uD燂F W9f2= WK"+"+"+"+"+"+"+"+"+"+"+" \9f#pgx+>x |"zj"+"+"+"+"+"+"+"+"+"+"+"+*p=\y0FGtW \p +b\pEW\pEW\pEW\pEW\pEW\pEW\pEW\pEW\pEW\pEW\pEW\pEWW_ zm+z[2 +n/S)y UBz8k{D|c@\ܱa cχ9| o{|աotgIxJHmV[U[r@gիդ]I'Ua=`BUL2Y77_ew)8в9FW  q"#ß9\r4n^-ܛMqiO (H_Ot.։]0 VZTwR;j r $=WK7b5/Ňl[-k][9;>=6w/tSKՅkS| &Žx7 l洤54l TꤲP;FM1o}jc7. MNB)5L'We*Ƃ,Rr>7XϽ&o6<FZIko,AÕs'1Er(7=P\%Ci8H"M$8{rsu?bsuOqӚIkӘIia+}s\=t蹐#2W eP\m\B) s ͕8wD Bq5?s~ tܒzJ X+ؚ1W(;JJƘ;tsR RRRuD BBq;s]FzJKc<"s[)\JBi=tsRZO+܁#2W UǓBq:s:}டs4W ]8O(h2Z\|j _SHN{1p0Nxv>nw)Ĭ.%ԸgMeL5٤`}Ivv-xsx_`G8~>Oᏻu7,|/ rVxNs%}|NՁww;aQG-QV1*Bda~-_e8٦:*p50úCyFrxzug ^{߇ew ݔSn[|6X6R=S]v:o  WS&:+w [|7(\:p.jAkț[9G{Oy>esZ|.#p&L˲~jSۥ)ֻC%+/7:zY bN! |ͯvWCm$*¸ZYᐿmoӮ?`Nć6_۽V1wmgH^U܍ |2p&8㒉;yUrNbQde줌}ngX Jm>+15jj8VK-WeG/KUfXxa_S:|(jwnUlq< ̝7E<Ma*9u RQ#-0΀/ S0( QڹCV`4v?!Ջn oGUm47룷'틌{& &Zu a L'd`5,6en4^Q8 kRd,gT߶ :oԛN\ ؝ßgWWӳٸ䮋{/lI9w3sv :}#ݷ|Ih/Eh;No6[~S} ']V_u`h ^ W/z,1:X+REjQ b!W* ?ʂ,* ",_䬣:`V|d\I4EQgYɍLxr,XSz}s@oRU)ҹ58GgD-16:/D)::A8, қ8 0' Xٸ&!F0eB&"dVA;vfrU&˼)7Ts}1h8+ΙcpʗN^ZIE d:׋)55j(|y(t6ftMJ=Mbr#=1w 1ss(˿ &=C:>w\}k:H}xj/Oiu>iW: ^H_o)ۻgK\w]~U1 OL<QF,V ,>I%6ב[jBtq_uK>:/`I:OX*UNˤZuT~+kTC+nlL>L&{<zWo}ɻzU^=+b}^B7X4/hmZ*!*:k<+xjUʡ־-6-_~q6uͺfm}-yZOiIjrEr~ў'WNZsf*AH[]dNjzW<*32jit%H֋z0]R7JPG->\943w.9\eOo9UP:Ĝׂlds7qnϡe%{Xf{S\)v.`$_og=YNnl{ف]''Wa9Сגz`=%^|0V>,s`M4OTt'8yYq,gXӉ[eZKF( J99dEybhD5-zfrf\0gY ̲g>Pgphj7مF7B[K\O>;He\;*{c ט>gnU{D/A` FqZaBhGR|*~;ZG~~a[7;zw-?E`Kt/^`U`AkoĂ{F%ϝro JAiL͒53CHcOz:*R>&z"cP<,(,좂du q]Xzdy,!mZ ԀR(te! Ev]M4 "LM/>}Д9jwtL-W98\|%&z$\ &%'s%`Q`KI"&f⭲x;rٴҳ~}q:=Usq>o{mä2 z|Y[DЍJ7kr0xf}dzn&}\ sCqɇ.sLɇqTxTv2Zg5dQZK,rϒЊ#YKƠrWrB7w ⼋yˡka?B_B!|ZքK>zF[ɠjBEWULGn d\+|:t]6k.R7KPEzz (c#{ݱнd9@,\( pD3 1yad.XRd&DU*@x JЀi!KL.˨2V "9#7$x`ƑP2&Ύ)3B3M~rD^(ziXzB87yfwN%dNK YOEm m -dsX)— -㔢EeWICե:_ }?ؿuÑ.:FͲrۮodJõ~~']DY̦遤Xȃ/ h@7l 퐘m>YRdH;:0I&f $~X7 [iOވR`RPe"a'Sk٩T!i9^Hm_; cPjc[W#l&/U/65B'׀xC|Ti޶NqME295xSha96h7X5Ơ$shșG8&$D_$0 HnS18-H\dՙYV Uচ8;.5yF۳WSދ۳o_cBTa2aW DL 3o9|v6ծ_8"}m")͢7ݫV4bCN7rr}GI<62hd> l=łMS9ؿDZ:eXNA.uKJSv$ $]&L Yt/޹Gv0L\osE v~˯>qq? 0Sfjk- #]1ZcXGg0#F:t(뾢/ۮkB%+~|۹.$j> ߣ|(<1Ϗ? frhuE8uSk3%"}筬^P֍ڍq%7Q(A>6<Ȥ;@ߟsTWGԛ^E r"z6b b).|fAŒU Tڮ }L3# ?Pp<\S3"AH@NLkCWWV@E &g$`tT+ZنHKUC4JDnW/uCyzYkOzZzrvte%N.mDܥXܥ܅wTZUT!mMZ t6Aa*ڱVb 8Y'ۨ\Pt$zCmUdT$fAGYrAГ u֋R49)*eXMڜW*f쉅]ijXo w;XŒ7d(.?>2]ߙor@Fp8]dLز69Q/UƄRJU*sxid sQ0齐Tl`Of6L36 bbQdkS\U-qFl?5XPvڼ2j{M㝄$E&~bU69edUAsZLZ \erCX)hv@21CdE4aMLR&("Q1cdꤝy[~U1 b5EeD="M[$CՉd5geN)Hd&bBdJh7W iȡ."ZS6N۬4.8JkLdI iA8;H=|uLjZ/.ʸ({\qq3dHCAeP8Zړ'̐;o/x0!Az\<. 
Vӎ}*!쇇a'q Mwn >(( px0O֦nܸԫɴ(Mhey0M'*EFs`nkpOZ~+b%46=h02EhtJ˵,rh̀}PoS)ˤ \5Y2-% iUR Q&ci&5ol1z.(Dᮟe.;=YJ{+=8@Ա@d2*:xL)!=(`)L)ϜS:Ͽ Uu{ZrtLq{h跗>NqۓW~rˏ!slL!'%flTd2g,"zaƠBT=".wjC o 9Z *N(eV*9L6XnK&;7Ӕbqm(P|ء \A#.YR5Skb"<`2i*DYmFzeBGhX쁧LfCF[CPuURH<6$2іF.ѣ)q5W\S) ==7F_3K2MY1 kwAZ}R;)f1~[~oXwņ5+A zJgږ?-BVK5@,nK]oTmM^BH2ϜX %lXe$+0|+Vsګ0zv p?x?k8xءf7*xKG.ounAm'u2 {!!;km=%5Kax~5+aK_'eQ8άg>FCY /r#*RBs6h^  \ݗM A%*ܔ?]DSyѦ_ n8"< xo%}i*`~Se6?%2" b.?I8 7u~[~6Bҭ>H]?ɸ ?|T^GW4sNWr2q]w;%^mDji.^ ڛ7~(jz?w?8҈Ѻc^ǥ>qpf0.wCP pb67&}Jpm< 6vLEiF-VwzqIxA'w48`~{Ѷ}7P6a㯛"Kp7//_I;?p,\ѺB)-7fN"tfz[>3])t6q[x`U:̆Son.FcnbٚK5'yj`ugN]ʚ_2ze;9pS-Z1 ^p,s>,[)}vAIQFs]$Y4*U8K?1"\*&D: m̄v1/1zŌZ` d!d XRƩ{ N8?TI˷h>TSy/΃%Aσ>L4H-}D-Q"DN~|) %pdrߗ:sqƒBpʵQX9{w;%O.oXWC5KP뿳wem$Izuyt13^2 CSM%}#!/%-RRUeDd1JD&p̣jnDh:vF|IN€adc΄y&s0telx ,u)f*EJ)"t%2βYal]|W}X(}&y_p\(w>- O=cF1g+j$/md*W9vu*":ww Kd? \a|v[ٶsy88+o9n7^a}O F4:5 8,=YE[畻m;UyT=]6a9P덲rxAC )NJš>DtccIUgUMsZWcgQa)цfY `XiINPL-"cvS#jTbnmhw]i3^KkEfrT&D5s/W2\1ZSi~D N&As¬RpMD4捴rh5Z`(3 etۨQ1pB݈/9%xx<Mb1$7)6%vEgY>e9l.d^*f:L:gqUl>pV| ;={WޙSчK5?` ~?KW?4j9=,S?Ө| IӌUjt&3"@΃X:;MW]*;\Hhf->Փ0-iBb˥g v#W<3\FWQqDW.=SuBpWC@>jBW9oDb'W\}2P+ i*Bkm1t`&&rN@&gTlQol@9VPJ}0(,pTNnV](](8L0+&A̐䥽Y[DH(򅖜N)bxЄ%yD_Iy^P0v(UH ^ wE%` *%?1i^@~l G2?.DZvy Ďy ۫TDɡ.OUQa;e͈WǕ_IdIKڎYie&#QHYu2LwZ D佖豉hj@969jItj~Rq8HK흠h%-Yn9\2 F,>njh_#vɰoC7 æCeκ)oʱnPɝ5 >&mNU:XIJWo/Ƈ 3_[BkF$rB}COb{ޯQ>iij%tL@s1Yl[hN;3E=:c_ZE棎vѓgs>ynz~H%*ڟ7&@ <|à 6.8֋eWC7W5]hƤ}VOn[Wa+U+K>n{Zu;}=~5F"ŸB뤑j%.h(:9^pgy=!b <* s~]`@((71{+%R.f#gqeZ~cONova>ȓj7T3JFR(p;6ZS<# ׁ+-YQ>ZR7&)&=)Q[M*Ffd j0DȘȘOWɇ8cO,(/6PXp3IqKtMՓ_(`0 GlŘAxDdQHI TxI E H+6+bSE#iHƞQk % 6`[0%^Fґ0,- Lb2#v6r6# y,;Emqڝ eiϩ!&p#@`h9B'4+F.qVqB3 Ƨ$(QpXG MٌQ? q߇ǂc_D$t!:gVs0T Z<52BXpShUT!Ti]^D #$$8pFBb%#1)DXҠK0I̥˙19*R[rtLl\/.̸H;\pq#ԡFtA;1ĀE 2D#X$|2<TCa6Yf}b7Jt9㾫SZl陼'# 8&蠕)000c*h*I"e΋6.c:  wH)B8,Qnv+SYfU4 `"{&CD0)RVIY,-,xGI4 G2C6r6z!jϼiOdNKz[&﮶ȃ]ǭ~Z0bs5vRvk^jx>bPpJ@a ;% YS6^pREɕ$ Ԝnxr;iy^ \8L)OJ"pM{nq%Xvȓyn RǛgFz# N# 1Q "QJro1e:'3z'-ºZG!)CI)mfζ'2ē7?n4U^8(|^=f؛a LbB/iFīv>חxwfaDfx~7<ןU^~9PjҼIN7e'Ua9]_'TjGӕ6o/8jgnA ݂7kpp=RU&to}Lt8-mo_*5ob<"VG`o +LL *ՁIh eGg}72۠j[x`iAmU}|t6OL[SYIS=se#v杼.9#>kye4379chhlsZc`z^m p)8s$ +AN }+YEї H,ñ7<]i =GS!8fhSG@`[Œ"9<8ˢ cɰ^]7^xXuKRJ~O겠) ZJAzLft6C4`^m/ $+â09:ȿH-k|DyvN^-JL>y/a3! =ᔫm)9{) ʓ+VR& 1A}0yTQ͍QmQ'(W#ɉs1[0l]iBۙ0ehȃ _ɼ*0cfa?uOnT˞>pV|{}Vy{37k>у~Cju3O#8~iOTU|j6`\T D-UTRjY {{٫Fq .ZI½U ][=jý 8EU0{xRՎ٫~PON`Pӻ:>3| z}S>lWAgbS~nݏnƳPpykv?k.q?5#R"`"]" +1+1[ckp?f}l}&y_&&VQ|jj#zƌ!#1@cĜE NVͭd *WC=0 ]Wu0<|Q* {tFCuD[4Lr}Õwum$1Av.H EcQ]-Z 俧(R-#yV ggO}閜Wm5s7Hפ`FjTy-lǻVYHO4zL#@*»)>_&ySꀾtUfJOW] tXEX'@Nh$#[SAF11 >{&)&Kp5RnnOVk#ww,EMhҶ=4\%qS*υ.izA(NJZAQÄLHD^pN%tD(slUoahFp5D<"徙"aM ƬcmVAk'DjV}im8k8 Mݒ;jk!iGr&6ѳJ(&-)SupEsR+"7[ul'\4Ɏ}Q_jiz"%CʎM |4X-:E#UƑH*.{SRPibU ܶ'/El7o=ڹچQFZH{kL`/J '>~7m8gY/nvv:mAe^\Ի».]|B q򇎧L4~q2"5/2꼹ZD[UDS6F[YkV/sw rf(.B$ǽ|Mbk%ͯx`v+geA[#t$n@'rF瓛|5x7~#NT_ZU U`=xUa}w_HZdo֮h nJѕ:: 0f3H_f7]9vc^1Y5cJq"V YW<;i(_Aߗ4"\([^R㶤79s&fYF5 0:w2_6.OXVb ?`y(7Irz&K^HԎ 9`Zapoҭa'8L=~ fT=U$%J]rukr5$e2X+ ^CeA8Q0ʂST$,wJq QVL*&(,Ű\#믲`eM̀7ȪTĘ3A~F^V1j&XfvWq)1EVeL&K-!"JybQ,%ˡ} p~TA\0EA Xax'BE\)\ 9h/eT(5Fe,9 ;֗(')+ INNhgI$F d!apeFЉ ͛]]ޣR8`7Z|9+w3c`@9LڠףswvKn?6\|h2h;Qęe&䷿fqjc"/qwvߑ7k }9%TD,ߑ/Y>}$M6`U1&uwHvm ՒadxSh֫Gq^#! 
w*qZ~ūXL6Uе~)4rÑEQZ{׃#,[[h7]lGy ;Қ[lvsxn2uS?zUPU'nt,BSX@ڌj?lf5}0}062J$g|4ɠNZhs*PJ;'@,)3{b%bQL د˘!2 IWSd]IYE/cM9vpdaW_mnȼ6{Ek//lgkBҌH*\~RFmAf^ÄWW_Q6^M6\4݋Az6Jcv/?aZN|wu'3tyeZDPA-l|͡S~nZ9^h2;ty㛉iugFdD{.Lۤ9iuc͆9LfήeDVRߨN<7&_/OJ=7Ǜݥ[~Jg&49o6>KzQzkb rJ";kT08ev% E=iڭ=EsLXqҔ`en^f_>|#4/B = ]T&Di '4S [Zh~'Nc?4_+5RI.r>h SQj#+MRY]c7(>ټYmX=t?rx^!*%cP$ֶ]A|vhkHr$ [,&dl|2B0*o/ʻDoo$<^e)/Q6{5(v#7A-ق:&7մP.xK Sg%v5tb&S[ыn=pDo=ڷ;30r6.r|1ZB`¢ӵՔhQ1ZƦh4EV@?템פkhx*yԢ[َwLJ҇mÇOXGw('fӼm_M6\jz\Ѯh6JcbP^P~/y+oy!1^v=XЗ2 p112H` 'Yb4qPb b0|LRM,#}~NZQW_]}Zi'ۥ'1J'+sQʞ}-! ȫ(9v:b+1W2j3xDbc)}3EšRY{BT5+<7ח7uثկ1b_1 'z2Yp" kz*'2id}L;R?_`qn52%iڿ>Cez ő5lYdY3S}ѧBQ$nXMp)4#B|"2is2i7/KxYϒ]I WireN)zKRUsTdh wG0<xzulV̛Γh@v˔!TP=N"!4񠬼BYKx4}^bdT,rr-@`mȆ@Vj5ӝMh?)ɍ1U 4 Ңe!RB SY|tkL?N:s:餣*A!VH!"'(z=B`;ޘ(Ѓ5}uwaVtIHz`7Y#R&(ŀ'`ȲUܧUܷ;#s&`/8W`T^N`|53,jčrqiI ܆M2L;:[NWE7]6Ū 5Ml:b_}9QYԝh/gq>6տm 6^y}59>. ,MJ%M۸Wې܄gm|^b[p3328:ʩg'5Y/W{':[G;o)i޹b-&mqPꝃ7''HJC(-GQ[e]RB4F8xA݀o}^!YlVP%9MN$xf7 +MR9ma$n6wl^Stbv&ξ;s=k0=es~n㝗5iT:  Hu*s~* kRLMF4bTFU2 h,x RȾTLxϚEcK^wtH(}wB{謗WVz*Ŗd!%b1? vj &yb &QIdC amEʢM9̂t9o;6y ~``Er*ݢJB (!<Q=pd1YY4drBM2Pb)hgX > D]ѥL_'247mG )H?H8gC!0wx6Q-bhPf eg؜N\(ZbM4IA,3 ? "m>Y ;@mxᓨ oq}pvbkrWK/v! ky5[(v+͉mi?O+nZA7lE(WZM!5(ɄE:x19|z~paˆǖs :up#5/H9+bF!ZܝDynV>yרk~@AƑ>Yo9!1O1;P5wJEo1f6jsmy۷٧/]pC(A ;}hQq @A > >K#Yx-c,qx) LJR@BEETZv; TҙWZ6+KXtvAE`UEkG# :#ͧ~4}7%wq<HLېTZ䦆aC JhgxoI247<䞞 L+!X7=[N} c~b)KN(6!4I'lD , `V~ٺl_ϔ`Zic5W@0,mBWxd9 (nㄵȚ@H2Ɣ+R,c9#C<꺉wg<T:149\hFdQ;8ىUּyY]~d1RK&4t?>oKt=b?X ֩ł\؇AK'Ղ]3}`*j| rE4DkC8ŀUH-$߈1QQrZIq/ x^clu^_Z=kެ}^n}r,bGb;#|:=0ͻYzo1: oGx?1td>@HQ͛˿X=5W%s .ozu!q!vH[s87: sVs0W)8!B[u@+ϮϢ~w/JiCnBɡ(^:I; _@?=ޏ|F oJp1g+͢HEV R G,}be|f{W7tt4-z.0BѫI{Eg4DA f𵱡lҮpoMҲdg4SLm3H2y&sc]ռMaʩ''ғĢxdbq|Wu=kޞ }rtzMtU~( pŻO cyV:m($hUgvI+nS~l/e=A][QKڦz.67"lfQEU* uu*/6ƶ4}}]H?bۃuMvmյ ]뭇ZªgImp(=Xe<3t&}h.<4vq.l棌9w{~[5&_o_..~8_ݼrcGʴUkmKr/۪(l@Jw*|B_)B1djvDρk;LOySN4׍]_];S\=Z8vdkb2}W'7Mj B)o{r&ck5Þ|Zʭ Gy@5mgxԹx8UԚ']}8}oOs1hP#X'j~>{,BBx$38$#$+٪rq5qG6>Xdxn&b;otix&0@,6npd#Gbf/9uK"E(^,^|O<Ӹ yt5*nj4Ub_16-Ocx1;!C<̇GP؈QTl짂J4ΦKeI^zؓeĶZejX-c1] z7ձJh zWMM]aeTG뛞צ5F"b:<䙿ۺ Ҥ?x$ߜ푶ꝼݗm~ד~zY?ZfI$g y )>iEϿ]J@О o}0^ŭx$^\>>q?@[RݯQ_ù 2ۀ{}Kx#oi|θִۄMūu8`5|_ݦy|P:~ntO?_R}0Ϝڏ»VݢjKco3 ^]7 P;)#VW%ooWxq 3Z'@|8ۼ?ͯoߙW}m@6}VI@K~ʏnq~8\\]_KXXvوW5oίMpSmv7+O۟{dU4n``R4n\R%!qPX flxbl0"t56it52]ŢC޺m1%b`2Fע] T[(jrt%ICc] J(]PWDT+ؤKWEWB]Pt5G]!4*ҕע+ٯ] z`|tEA^xX\FWB{l>_WBIjut!8gtŸԬ]1-{3;آ*QS Jpjѕu%j4-30W+EWB1w]1n,KՌtmܙdwąSR$B/pу5G>^=ОW.V7ͧ Y\q6ݒȣgU4p}>.L:yΡÛ(Jj٦M.9wa'q;rix]H]v%6&-kfru]|N <ܽk\.vRK=p?kOj!f 䗩jSqn*# i[O}T3\='h6eǔ`ʶɥ=>LNr9 7M5n-ىt5)/]JEW6%~ZEѕ:EWBc u+vVב] -uLEW3P銁ӣ+EWBKُǢ 1(D5\Zt%!+Lj"OE`oJpѢ+!w]15EWsԕсɠFW] m~2Ȕ`uhqŅ.T֮+9H04;m)SN04.[DEhXZpbiu&ni܉)G4J{4Y_旃&9ڿC"jZCfz)>X)ӕ}ޙא3@\;@c(;wbi+Ģ9 DIWLz&] m)-9JN:HW]]f2(,RW48W+E5A@]Ac4 0ѕ] m^WBf+.aYl\{j)M>!}ȠF~Y]W4FK鎁&O]vPX{5Ut5rpTJ%i1e!Ҳ  x$Ӵ$!1n0e&'.PBY'=39gc!ćG&0\XWhDz(StyЕ-: vW+ujѕ:]WB jr`Ht4y0iJ(]PW+ѣ+0cx=_WBITt5C]"] p4jt%'{Fk] +,ҕǤFWKiJ(] QWv&jt%kzrוPY*@4tHJhc]WLrKPt"hS4LVO +څ2,uD]gv./O_cG AvZ'ʺ@ˀdQ*7Uj Bdh^͸DӜ!'0ܩNrmѻrEW6%{]Qu%SѢ+u)w] %"]1pI7EWB1w] e,ˠZ `FW I6u% f+N芁cjtŸ ] m~ =芼+Gjt%fJh]WL\u-U+qzt%fJh]15j ("]1'=ZtŴGOB|] sEW3UH6xm䉪65xoQMTijB1OMG;̼Ԡ;#I=O^NBrFO A QYgpDuYܙ>.yհim=NfFødBBIeFzf4gÉHfz] S>A3''52O+(:m I;P+Ԣ+%ҧ%&]L@5ܘi^WB j5t%GW] TɪQb<^DWd^8db`] n ZtŴSX&s7p0I8=\GZt%>vL]/[b`o] S.J(]PWњHW mR+EWBCb2]T+EEWB0w] erEW3A 銁y*FW EWB볟 2e9ʣ銁ܩI GWB樫 yь"]1}t%Sk("X] qQ.] 
:F GZѝ%[,m Q*uoN9Bsnjr 㩝gxw,5ZXBk Yo+jY3o|OO:Ij#ckz vӼU0Z# 80uj[8qǏ G5rpC6)/pp?;w ]SP[jꧨvPc^]uIo9=`?̦$nDPQÏ ׳z->-&~:I*1I8ǎ[N0,0D6y3@i ³e:E 4/W(SSD5IRH0Ew%{= 9/Un~{b˩5A5)W~]—^MWY̓K͎_^Ez JЙ̑RR,KwTg6P'f\ %B6`zD)7^]ŕ*7G>DPV ?vv۪6ʴt]SHUHpŐa*.*[6g\ qMt擇{\~⮈DVֻ;f ^RKmN$J˙풐c)abj8sJ7/((ɎkU$O )wl"nfd,)ւQ!GvJ)&Ao3Ȇ'+&Mn¤P's,a3fwAǚď>q(Zmu&݉&D3Sɓ`MД\SEhzYiH`F4{^R6=TFʐ,L[Vmqbc&]fY eP7tJ!"JuVРҙY6 jdPlfXpz;eOO|Wak\qUyk,F@3ePGr#$839`4#(fxY,dV|sXϏ_Zut;)]׫Eٍ8r'7CPnD{)~=2nc/ӝS >Zw,a+>l^ )~zo>h˹6&N *1R 9%:sṮ4%X)biSCH 7u(zQv_Yp_iږ\q"?oCӒձݻ.L(VۯR[1.PL妖VVQߖLT'ſOV*DŽ,72y2G,WyHbF{D<ϵ+`:y@c€loe]E$LD1QJB~!ʢ9-7`asɘuh`9b2Wgri\Hw&m6|wU~8ܨ%;-J hk;4,ѩ p00B9I[sd+mטyޮݹHBcKyEQ/"Z(x)aLRk*m5] s4 yߧ k?R=aژvnfZ>c)>y0sD/l< 1vp-]MI& 4٥G[>R1)7?yJ*e)GE0c PHz$!\ثF%Z7D,`rp׏"T{Η(-\a'c_ ytS"A0e5dLckPfX8 %&4&+iHHSD@xh&gV "4HhQ`r!Bg0&94(2Pb72tY*ck9MIX`{!s^(2&X4:0HAFhQr"# I4*$} j i.)Uss IhK5^ bXG m$AQH$þi+iU#W2 :F6}J99Z¨"N^]Ӽ>'[%eLZ_Ju&əVLNz -嫻=Ѡ5 T48Y-h(=y`:dVQC㺫20ӹ5]1L#x\ V T9ss&97gss&97giH) !HDk(./ɀ[xb#2k ?pԴn=iГ=iГft?*9pQKk0e59Jh\DSbA$?jwS|O:˧Ve \jz峧F0 2TGЉyr^s7fq?9zُ2]ov]-)^qsage) 0h\٫{|I؅HONxOPTRT%..y)J.Em,7&iqm-MxJp@V-7HdR}&U(B4Y9"y6<&!Cg!Ozs]N󗾺|"]^eG; .ϙ[WH 궁0Rrci^o8 PR6 }m&z Hσ6gQI8qS@}ebV^K h4o]Y[6R`vR"Et0F̱6p#yܠ^gL- T8';c˥?3woP fwKrz;TI4:hH?}{t( O9&'7+{{hm~P|^}%G ZwDߕ} h8;9[he,0+~#k_j$GA/)mĭr[9 JM̦S5-l85٘߶`6ŌI * 㭘Mh f _JڗJ4A tgf7K O ,C^_•ŏ7tPFA/ZA;"7Gw RS>\tN(+H (zxSHBsZ4{]Gw,UtG un,-CZ#(ct[@oH S#q#ƃ3d ztLq 51ӗR:n5a;8;D-W'<:U, bxSHkAFLeҭrLwy: `T+`r%2zF(EJf{j5'"o!LA!L^o\r:w6z ^6Z8JΐaeVQzM#>VS5}FI-Ù0peR7VZ tɅ+@|t@CeY k0\*Z2 3}!/$8du =O=D")I1)( [: eÑp'#5 b$,zIX.wd&aYΒE2 1n-\Ov\ Gb.|\?=P3#S\TLӃ+)iwꃞ] G+98TAϔCa;{{}ǽB>WHv}]砷HoeU,NVjzJd5:D ;@Q@^^D גC;D1hzxJ K#pBT;^p"y}LK/; AN6LWџ' XhcGqX;Ț9t׆ 80BnHO:Y+Z<=8 Org[zCQOd h SN#}UjqM 4[˄VSlp!,26b(`u0L OK @c$P#K%c )8B\ X:+E0N0EnN@@QXsWH >)>cZEUb #g9dԆy9Ook6w~eY*G_?akbv?O9ͪx}ӫngsi&Rٖ͵]Z\U@r9{{Sџ 6&NUd@>FT׷! %ZIu\!%Żj ZC.)ʪ~/Q@Di)kH8D-kE.ֵ쫣Jlݍ*~ !V]]WUUm{]rNղKBPI[P@.f[b }&-ݠaAT=m/`_*6qHUin]tABuj]f[קm@>F(nG7.+牿y`s~SÆ M'ohuGWkEdCL5Lo:[aB @i4ÓsalbBQ '1V6.hhʒQ_ӔI1} Yf_ ̶8W۶9'cUJ'J,)!x rӧ' pg~>w]LH͟r5r5r5r5i hoƒ'-,U KB`М%:-qTz>5чf}8<:℣qw,|4dc.bc_..Fz9PcD L9Vc5X RWDY!H-6d)6xEQu#Bqx8$'pY]Esit3t19oGuB~?W14ŜѬh0{dfakIؓZ +tQx&j4# 9-7x߆fTpkCN4.Ջh!vdAݹϕXܿ,~tlڡwyDOv]_CZI2CAIMT{S|Yi'wd"7vh4rLgפ jKաyKy׹=SUotblJY+K9D?{ܸ/9TpGU~qR6[$sJG-;I6 YiYH"ic"_F_AI,z!@qw)%)LJ G) ()4tI'LFXOBMvb*42. 5y/]ػwx0IͧD+y\>={L H>~-tA'~.߯~{`xŻjO㛀#]LUBX gOKf|O$&/.mղ]dG7цHP@ } + ~(g7T &ly Q|$SS ^ . +,Wı#3kED*'Zuvx's2S!2 jSR}@bHR <8(= kNnEG.+"9X"fX2e3N jn7T;5+|=-iX`T9-ex܈.(CՁFoSJSK8*8Hj)5pj12=$$^ i<7hUV/Vnz)Z2 h(rŶEٕyCQ,9y9d Ń~OkuԼei2Nׯ 4*r+[iߟt+?8*.P7NZ YI/=ST*E˹*O\'` ë'Z0:?cr]붽la7W. vX6+pQ#W4խ Rdɖy"[r5SƏh#Ywi/*rKTOQax̻e̕:EWID%B{HDif"!|q Sfl5aQRCc<8~Hf #vI$*Vm~9;uvӫ(%k2hɳb1*[{V%%Y[ќޟd!emtZҹO*mR6.JD?~ATڈ6ZwXɢ`wr&a3kVv.WzZ@P!Nwu)g$'JXEdb}6-c1q9 xCed0rEvŖ~x'319qМtɁOR8TFCxSJ${;+Y&JzJ$Y(5ìYoOwd4VZ }/w/s&.VO(_ jO[CVg U1+$6өèEpD>^*m*SǶҭGPf#h93WwWS~Rɵ%<{茄Cm-`%x꺍ڳ19i@sNhӸſu Li+)S m w԰z(]_/΢p%Y*(I3 4Bk`S*"FSf6=På<zRCwtB(n_cEd4wRgEuq:#Ke79,E'3]m␫o$JEfUQTmFyfoQ0y=8>BPp4|2{Hc`e`2P|\&"߿|q!3+,FCbS yVp*wMWyjRHS]f=rdtƇRJdCBe~RZߓeف` (nM5mf#S 1P_\⪿w6.שST&C+fUQل)EʏknZ3hمP&ZQ͵nP d@rUgn'YrZE^'<8YW0'nx+N^cM/-wdZ>Mc]u'rcaؿ'U(FȅV@HY"tH<$V4 b1Vҫ5p/3F{2't}})|*BTeRJjDj#2$ WykJ XpWA6i/2GĮ*ŗ]uB,a4kAmiP 0SEGl} nj T B'T(%Qf$$"،@; 5c:D֘[e~f4Em_B7s#$%_Nq v)YVb N W+՛ (Z4qXry;R) k6+L.<ۭmx^!u4P!ʴޟle`e>&6H/SE'>Ls)':Vv' 6nUC$!ܸ6u$ߜP]hjP|tUhIcU 3VS0*K83wuSxq n@!p$:M :B,.m}spنp12Tsc|c+[q8)xmaϘ՜IҼ8`$?`q7dQˣ?>:;4BmF(I*nؽ%pIW W!W\V];Qcw5h$?6T=G_l&DtweLx]"Px$Χ,Iu ަ_mvz5J T[%0j2+ĥY%<ĐV׆rs69d#se+1 i^?6:).⽍U8ĕ'PҾv).xJFݽ  N 殂rV0)jh\돝gpoP/YQBd-z[o l6Fw G1ֶ.Xbm=W/3q4=6ނ3 +! 
<Ȓw@YCPx۴u :ib+/ v;_HRPg˽ʘ.A$0,9 %#EwBmgG)w2[dU&Zҗ-vf4l〢@P[8fǙ_a[2[Rw}!#^nhzo[〦ǗT֛o91+shW`ϙg8#J%1Q^ rf$QnALˤ Ka F-(p`+De$BZXh`D&uվ~ȋn %d hTw/mkƾd</\8 ёh @'\ WS3[~ESOkBKC}bL%KN]!W=s.epr{7KֻI$4P>3G{׊z L0s%{#mi a9N{=`_x2&jpԆ!1f^dRy9yӟw~aX>A%!ĐgB-6&%Q|2j_J8v傘z~q)j^݀3$CJ9j|&x?u觻utHhskP+Ml4ăSCeJQ#ѩerƪǤ ǩMb!gGogD`r;+IW.kn xԗCCZ%(04fY䦪BI2k?)cyGN9/8*u 7=of!M[^Q'm<9n95akKTM[5aTz_/?AeßO|$P|\}>No$ҋWwDžu sn~b4/?uƊaL c 11&790&aؑ1ormٵ|2KCŇǠNͿ`kh͡Ԟ~Zw&8tkF\kpS>$w9 Q Q>&o/1wKLMQ^lF4KZ)m/Twt[bPt^ *ƀ6(% /en؍bQc7KsALR?_Q9~j6IKwG'P2ư883fbO蠷)YfTumLVN]F`1S[TH QҲD lab f&`pA< qHaR ~(Io~|(9|J֯Cu(Y%sM]eA0Ԙx#卫c^+*#mϵkߑHFyd6 Cӷ!*%44.umӶ07DW\X07 #!$"T(.-Ɔ:1cn42" b:dbHŹ8Xs M8`I*6#/$eepAl/ U~  q  M/J"_a;RNd*%[4joۇױXW^*"|wWp;[qyo^YOf囯f־YMXyTVwEF.3\cBQ^!*1WT{!h`ږBP po+A TbuS l_l<&tT\s[r4!G&XA% U c<{-b ueI5; $× 7M#e"e wFaRq^jPHPű.IGQ1CU<+Eӌuw""JL׊tRD)`[È[p.Q0W؇`3pӴqE%"CVO2G}ʶ$×"7R2IE$e VtΉJQVJ)tX,.dpf#+)$7R#z"),; ']+e2fFh(Cمt`$ѣޢlL;0Q-;ua-sBwbeZ#Wߋ ͻe2/.t_U=ۅ_)_e"lPyO^]N^L$-@:fObЍI&Ȩ#T:>IO:{%H7j!A9uu qM˦\> -tB1 )ɫKZS߉4owMaB$.iqߌh]`}7XLoݚ̪qvY:%-"XS=sPcA(GItf&?}OcܟcQ9rS',y5~)?-)y<АϤ蔖oLY(B'Lv >EDh$@ǨWz:J=EAɈ31{'>'e=4+g䚞Չ"Y`¸z5 6 em,Oufa\5 6 Q2¨~z]Ybc*q4jYF<a5/`ċ*\Mww&4q*syD82Z:YJY"$/HnPUR2\ey{iH6F$!b"_^L\ʕ.ނ(z]& ]݂<ξ9F9k ÖLumk;ɋĨ66.ymǶ>a9"r&ƁUɬPV l'm"*JgO_&^Q3Ohj"a9I]Ssj)H_OxL o:wSb wK{9hzIQ`V4IPF$ SVj|i#f[G=XeuE=-s`ʸWb)olx q(cJ|; 楧 3Ȋǯs[0+lfY;kCf=4kE1WTi3Iʑ1' >:$]zz]o^E=["#SIf*D#uRǺ| (C[Ÿ-|P\ćЏ# ʑMЎ!F9*RpB.;B{H`%\mÁ#hHmͫSsޢLelf%l 7&an!a}4= 3f6fqo]XPait{rCc2<hxF>R┉#(O"/APUpt7Ok>G{sgDXP!)1C}-w]+UrMs`|3Km(ʫe]ޫ7klD|u(zZr*T)[IĒ8^/cvQB= xu77 ];9aM?U=\eI+ToD3&紘DGd_z.-И՚}oWO/ y<˕v-F|UDO93՜+0'PSՋ5 K:%Ɲ/ 2N`BZ@??n~|(\J/lI)9hm0V-5EOR~6OA Y[4 q!qyPx)7Ia܀`K#Ջ\z*b?)phj? |7#BFj3 z~_UxwpGC8ߪ6dZw@W<%tml~RCkK9 b ٦ҏ4B=j0&G cMY;g;'8rs nh gsn4љ>naHnuGΩήNjRS+nYR2*5mL-ZI=+cL$s60YrҚ ݔD Ce1 ik,(zf0Zd 'ދItj$Fk˛&=z$'Cea.(W4Ӌ?;0H(G̔k='-0-Π42(B:0HY !gbKc!F,{9"WOoqaӅ_MEB}:5&`DFh}.EY%!aFO/HiS腒\Ժ&o2[+^˿Im(DsɜA:ޭWQ\A\][s6+.MU8\5Oko*U9gS;9lHزciL*1JZ#BS*J˅&/'* 5A{^gAYCK&ŔmzwsYg#b5DBGxY X.]+t60M&q4Q`s%I7ǮD3T}Ͳ;\d}(27q;u|߮b/=a. 7U._3Fk(3!V9%X A^.8vdKY%* mh#x g3Uo"hp F2ƚ)b6RgzZtM!2H=#Hͱ #\Ѧq+/ v UOR`;ʯ+v/5̈́#*NỷI0wVdG53|ȃW#@ެ\%Ajjov4PnE҄sA&.eZi4ygWQJyJ PiFWHK Rɚ1zNt*57Zø=a |׾BUٸؒ@Ւws4jN7tbG;Pv5t5C (1XЫRCr&׌;ۼ3/wV#uu;%w*L#*jCޒ z|pltfG.n\-+hd玶AJHGl*:)cuCUW7>%Ơn 2)oWBp'"ݨ|l#O0| 2k&܏ow"&f{WOn)R0WvG\N'e=^MY_ B7{ 9-FK$!Aq9}/d@~F_RskGTF$RHR}?6C`8|M!zoE 8+k 5[`Ez+QE:VGǁκxD,/X]uuDkp~N8fkU#0{P@Mw.A"VSi(l} l"8vml'J%miHsQ.ԵnB!.SJ]<K /O}ѫ7:ij02a~!XaEAަ*tNpkhХڸ>YQ Z$3X Ψ+OneK[2q N Mp'sk}!_.>E6Ϥ!1Avd$"ȕ !.; *@N@ A nkK[ދY׌eѷ'}?^QAᅵ)Ed}epM9;/5̈́( H91=XYCtb 4`Nl0{1Pt`?ViOn]YOAIH_s!Xo?x7O7n{w9lt:z$E )BI2H!ѷ"L?eYQ3tT=jҲk:5l19&<;Ifs%IشG%@Sʩd"9`S!LSʪ9̉ZacmmVy1B:2ŘNQZ:dM)4ci.ES1T0cC9K0>a f9f-!'UNe#۴$@#i 4͕>Z .q ,\:˥Tq[l4~U_6ŔM־`p{!6'6(K(`AIoxӻLh],^KNJHm\\I.S;0m/K}m JwhkKvdEGڅ ]:Z֪cfW%zeWu^;+9.0."0i Nc!,x!n+-V)W)u*4*|]ЍWߚWQ#'eT;V\[KVOl |Q<û_m͢E_<]FtXys~'9Ȇ5/N3\pHʩ|Y}(][rJԜ]ms7+?tz?v6S7$N;&V pZ7~-,`]b &UΑtȵLe}pݍQ;NkPd՘E_f0wNq[?h~uщ+)[Y _&InJ/ܜw{7y޾1YMի )(\~Ŧ&/c2X;IX%dI1tF"JLkq+q 3g|8WX3[(0-i[ !Q1v˘[FYpM¤&LTVd_'xH]@*AIqQC7#L"5x;,|8[*oe?RS .У'+ ƹ:M4N`0AL#m4q Bn9C[MII 3g 2F[(oƑŠ>:mƸ0*@DPJ|g׷7vMVۆ[É2! 
^j0'()$Z`F<@M1o>.`Xf O.K/$9X_=(F`!o &q($r>\%E+$ɤhk?p烿DȽMJ ȉؙqU0/Ij&Dk EbbMHHC TUh*2+"Pp` <)#;mQ/@ D P35RkGGTC&C=MEbXA`"RJ*+#s\2(Xpbڀ8}l'HCIcˡ !^!l 4ae"/&I~iCH,5#ѾY^M?T~ķ"Ԃ3ncR cՌ\_}l7=x(!gCOUxtu]Wt fޞzH1rN~N.=p'Dq-q8Yxw~^0x}n:;p'B=m8r;)zOwқNlq.M 8?`,eP<+1eJS bxά?}*b=m~Co~^7P!}0t>D}C9A8jg!uVkJB±H'ӥV"5ubP$6ww <2Ww,> p}NiqM8PY?':ۻjfc/=Zw_Apg[v8!?zw 4}j w?XOWy^|jZE&FL.}t\ƒd 82#[ HDZcΙ$u 3KeZ,aZ“.(\0_)poF;ɍB,HTbV$ [m KWANhJoj~J(Xp*>aIOS.c|ilׁ A A& б輢 wT4MqZUOחEK l ϥ(MaLy@jPq_lƵrՒ4֚JI1Y32Ukv'thP>CnuS 'Y ߒ5ྐྵ  cs P-1Š3 zq"+7ys8P(uGe&k9#:n1')G@qs a\[UR5Y0AloQ|#S~lCDJK+Ӹic4.yU[[(>gSfvfWOúΪvHQts0rsua-bW (6fU.UX+L(?rGx^XHχa'S B s֤D8ä4QաO0<59ݣH J)!)!? %)u'ϱJ +'?JW,ؓzM)<m|Vvu Wב7K}Ͷ =6~J%I}o0k9PF8J>3% ^"JE~rR±4\XZAϵaO/2(gr$+Ko': Lꄹ!FRS ?ojƹ]E.:Z猉vio$56}O_[0 k# f!h.rZZz_wQ"pKyy HU aJ`s"fQk踢[x@[T@TȤԚW=B1<0)>rUo\9 N3SB-н6 劢Bx"o FsG郸p,ƺ%nBZvFqRo<O a7gp==P-/ƫ>bӖ^(Ej#sЏIAAq^bla#:w?g2iuWįU[2ek3-9nN HCUEް(V*,&sA٬u,cjeko}D\YS ۨ>䘯挧"Y7Wta7AY0H+>i*E/<"dEBsL08~TzkG2.fv6ZR0o|=j5p-n ҫF{=I2C uRK\gB(8//ͅje_ &,=bʻlW7^K&Aԧ캝w67YVhA?g6 FΠw"^N&TAC2 Kvi:Pu&vq2a1^ nmKܶx7 8s}kGIڀIH~l v/n{ D;}݅Qw5|=|L#٫_磷?f#CS&o \95y`M dS‡Oanp)M Mm ]=ڻYi x迧'_}ӫQ)o|m?図W/ѫG9;>?eɛ~?:?ͭǑߞ~ct?>~_>|-޵.Y 7V_.[V?egM2!xE K{V8mGN vf_A`Uى5ȘMVٓМCf09zi\ ̩"A5bixG{ 㸚ѻv||z}+\ύ8 [ E}+Kd0|7u˱2n ZW}9rs~}>_^yk}»!'?gW^ MBPn 'VuK GWWGn/d3G}5ʬf=2 $Vq/|Ԯq_ΫHrMxּ39.Ȉrr]:9(H(HI@%̤8$$h' ww_4 9ә GOpQ(tX8fԺ_#8[L>wB{%AUcYI|@6w/,X5иQB.s|kE޴B3NqTR T$'pwJ T SNKƑ2$(RDƅ,TaD>X7T"HpZ*5ADlP"0TǼ*7$b(#okHOIt,Z߰!] u\G3aM3K됂VN`+AY4$mfc`h2C8E-18gy3Xҁ.[KzkIo-YKz:F}ΒV2xuza`Ľo`j,K$UQ%rLP@*(2Sә -ɫȿ\Gn3-8| Ȉ~>nmŝH{Mx3X\e4_aaMD~K7wzD~KD",J1ekmOK.$Hi5IjSTD^HȯR1/{%BHɩ3L }5y8~ЗB@Gk> $-f-zJ&OfM:!G׳,3eHt8!ػmW4mǴp%v:ǩ3qNdxVRrd@%Q(^DG@r." aVķׯoۋ/)i$^Ze߳dLǝQ8~?+%-q0sR˞6pMܗZmGZ-{&$LH -squ9ots,Cg<ͺUzѣw7DvP]I}Ԣ,\YAs)? @7INJ~T )(|r)fNM\VU@kLMc-?v.ոIAӾ#양Ax#&nx yܶ"$b7ee`T 򪅀ʆH>Dl 4iD&PZJC޾::-c 2KzWDڤm&X㞎ݻ&Ɲ:ZK !w̢Ю-j!8}VۆjHMܔ~ {Nk^U,~f}˽:5GP&ZLkZm(34_ݨVmnƔŖ y@+YZ*6PxdAk6jn$An$iț5yn5ui*x :y@6ńð- -\m6g;5h5-᥵Ʒ)\ G7&X g8 N'tP]NPܖh .n.BH)jb@HM-n,XSݶccx".cV푆nOFn/޵q]]-K|iy>N֊ 'Ж;m6 Ьr[ѳ5#(1=Xk!ˠQ|b ش!U$aUoچ,t֪m&,A+orQu6q{Z1aI;\zM Ǵ+R!&ˇTuB\H:7m.X_)d4EҶ_PR:݉ӊqUX/(*ǓvD/*Ò%2߳5ċ[je*ʁٔ{Ǝ;g4b[arE=muvLo]Y[e&]mUa[&6!  
1*UasNXSVLhNXOEuo7P+8,4VYe@0B68,n'MC;Jv3:>,"VS590̊oQ^ա9yO 9ƥMN̲U㹔խl) 䤌5-yquBrqpOk(`l<f &V_Hm+~mXOw)qZx'[q|2{`JQ 7INJ~8ȿJR2\"!T!#sR2f|KYJ$XLh 9Peݗc#36ICDy~ϊ4mTYIFJGS LIs*HElVW~$v]*{e`tgɜ+'%+"X)9-IF=- Muc\HJ3]]Ѡ3–1ۋ7=^TJDP0͞8,#0kB6Kv?p豌7F/x3U-3#:A׿,:T2 ^Z:p:[t\3߇}<}~]ɛQ8w| .|z(ʿYZww^9=޽}|Ia?t?'uHo}ǧ/[y?'oDgϟzOiK47  "%*E/V0P}_ޓ1V"@6)Y'_i$z'qaYx&NهSu{r7R hl~,q_%Uo=ћ)hE?=YTMzVk<I??T^ K5Xދ6{8\_G+ɎnGeXG;@yq(%^|U /<9;/Me1)]7憉PZHOg^ (eY^-]~i-;@vWo/\/>\Z^}yR{nM0~+yvE^"y#A¯hrcL3HOjM_/\G՟C?p4??\BĿ ї(+lcYNMWgsr7V?ϷrJ{GI*X O}uT 9% 6q wriN6ߞѹ۴\\MN0-xi6~ h \x?l:U |>f&CfDLV9yaMB '˜tx,u >|Ѝ-eL_F*0>tW&-Ηtݐ%" '6~P8[.&NuI@MlAMV|pR€C|t1 ֢8XKCLƤ9.3b1!ANj` l>P'ZqYU*/>A30aA0;  aYy-esrjd(ߴs8TAx; Tm ݊Τߙx[3mRl[|pY\DN ҦbAnSԒJ"},iOt(?ğUe҂7d&wm5@ÖpL )n7^2?K_bG='w<W\yV*U]ye+*[ {?wR1X^WL|𡺄aLTC_J2Aߪ8DкZeC7'V7ΫW4 "T+l8F{_CedQӱOOŒGFMiX4$ *S0rW4[LhW߿{{goq*ܨ(,yg]h<䛁G].~ACPy$A afbQmhfȜ[:\.5gner77R%ca8门4"TaߤzJp^ʙ0o߾dh[PHEyֈE`|47Ue2G 4!A3yO^y@fSRHm K :du8hk@64sgf 8>sϘ=V2/ +Z`;pVleͤ~ E'Qq ^~C[3dS%ٴfk#;1lCafeHٳM [#;:d۩VM?n~@RSj Je\Z 6̐6иmsǛn@nr6ӂ׆O4qBa-R- aD 7Ǫ&imh$ A[?j%blM${}Q1rn4RެE[!mBQ :CheYFWIqǬ7/cZ\K(/ `.=CwϽ>Oz-N~ w6.O^OYȵbXwoa'6N{~h;_Qh dDɳ|ds!mVNOl6iunyY;^YHZsFBj:P>U+,VGu>4B1Z[aJ؆%Mj&Jk7QUVR+``xsxld'm?td鲊Y-D,2OxF>nx?p8h_ n?c (<5gw Ŵ@fBw$ٹWi q$INFȗ9*GA'umGqGt›1Fr) xR-:e"WTyUy0|Jmzi#qa6 KR>?zGM$c+&K&5z}=c~x5^X4 xduZ6%Zg˭4/'+D|KN %%IRF#Ň&|pV$,Ζ?.yx,7uKO}'&j +k=P/rԊeEj:^yZ hi,?VDH\XimQիϕ֊goK6'j=jeu"#zc!@a ^~$d\̍-q>ae_ZR\p%e"2!kXw j2d2G&$gqܓdTg1nYj{"5SF|@wK,) e TD: 7$0:"]'"ˢ"$03ZY\L^.m&t4w D"\]6B ~q~?iٟk=0M!ʬo c*剅ե-lY*D+^q sJ?yK"A*]v vGKoSE`-X _|g!QyxMry $ۤ^_۰?MG!մ`Wsev*QPg|hJ}*]Ll֗|<_{AS"b>x~|Fk@tTXsYZYܰ u3YQ+*,Z55"P+H|k8e۷_3U# g6Cl/6ƻ!VW``%-{mow7&jlCO#ѵ~?ـl!jC6#$a>vR ڷ)+ZVWߑF+Y9du \k&嫬(Bcί y9/?yӲ(F{^y,Bq+xcY@[q#̜1f PD 0-+X\І% :~%) u F#R 5N#c4ue< +L ֘#x;l2 $>q0b74jܦ ƀÖ}x[i~oM_ ]o>}['ëb&2%{҄AH+?,t^;8緿t›G9yΛ?yo;|&HI{tpWE{vMO>?Uyk'a ks:?v`% Ս`4{B&a{ڝ]h04i'ף/C;ZL)nw]?0Z!$S0D F[S&`BɗݿAfU |i\?v)lLtKj.x*."w4MB|3GozYG (;d2\'` \ I ߌo;gcu;6gOϚ|??s mj6S> L#f7a,e߿eItS:vD& Ϣ:֕tE:3_Bc2D=H C%s<UySX,W}N17cļ ka E>(>E-nhA2 a12{"0<j#bE)UĩJX]Xf ^j$|9$q-3ň&0\D|#9fFZKB"K"LƘy!Hc[9ׄ BEsY$,"JZ\?|uL Eq=l FsPUp݁:߼󜲡*1)T"xo2v T P%Ѭ:衂Sr/VP;11#‘tR+ mciXd#"2LJJs-K*FP$#cA9|J$T14ԉ$QK@;+?tI w>254d)4in}^ofF>6׽aZ ŧߖSmK)bViaE/w GDpC# ˦g3vGO kFJ%/Gg_Aſ6.aM0)8 In!Fh"+}LHqc+8#Y(|dk9mE%1X.mW^' XVףR0z3Nl[M}nTzv?(W ;F68 K]i1 f[&L?9p7Âŧ$[LTd"qni7Z7` Bhp:$iE&3a_8Nd.4W=l!ap{ssJV3g- V8ND1$) D@i"cI`KYŕE,r Pfg~7\R0RRG%DPš}+QLp]| mUOIN}S5&\2&}joziªڂBP@Vժ1I=G^3 S)/^ KOGbh L tmY (n={>}qT"#<^'Q"gE(Dc^ u c"$W!Ef>E-DyWKĆ,DE__}iWCFw.DO0ZP6s̲H.xY"UX돻+ǜvgPԱ<-hbn(Dyd3)mVJƞ8f ՚q|i&gW +t"bs }7 < ^Ul(DmAFUB.3;&a|}ABr#J}Wp$U0O\gٳBH*}?+) hvUM)n(ResT)x-;1ݺI0<#KM 0"1>q̌NbLQN1^R,*a"$()^rr%_Q.m,Gp [r^^(eBΜ&GaZķ 2ʥ!_PlcFP&}QR;a^"kcy)54V*6se\xjN>i">b3^r"3l4FO*;P)+;l̅dUn,,0g93~2ڝPFhwe,LMĥLZo 3ik&cj{6_e`9aͣͼEx߯ZHDJM5!vfDŧ ܃]IJSve >(˄s~ *-YU6{O5{iIs`h{]og|7?->/ܴ,+<$RKHND`J gcD y|ob}#aT#ݱ* ^D)0 %V/XJ 1ƃKb=(s=|V>x}>^sE<^sEze S/"5R^Jpp0VI#Vȃ1XWbWƃCcC.L|r&%=r{"hF5EF39 q#e8Ekd7q\gp'(%c|n,P,QBG\n=hr0;F7}1I6zǰX48V@cӈմ,-ḺL-r^ur =Vu9"芘+9/r8.+ XcIŎKa 0F:`,)s*aՄaʽP cw(E{sbk٩F( fYdLܔ`zatix]Rv̼h49 ]]7q\č"nՍ6 %rar]zjbUo#b/mq .C{16Zr7s@4f,눞0YD4zE*ӓF$| B?9N"e!fdCJ]!Ĕưl kn A&#"`h: 뭆`S\_C.'J: N(h NB2PiB;p"I:#ϐEM;&VqT8{nw]wof23]J+)yB .2MDrr r. 1hCHQ 6CAxqΥԖ3I^+J;FICzUKnmd/WZ*#/EɨtqC9ԂFRIv+"cg(%YcE x~S2S7(zrAׄ逽&Pf8&Y)qƐ> ĪUOł(Fz\zȰG"^h lp? 
^õvRE|ԣ4AXZへH&ǚr*H~ ux.ؗm!>?nLpF%>vSo%,c) |I4h!c+wWz,fcIezd>E#|`l+bL AkMM[B];efWMO%(w9Nâػru*zpuۦGX~+ ߮/{s=\zvO9GLw9 +D>PAvL52DDpEm(bVJK.9(CU @m¤{Mpr[L4os=.;hxro0̑YXBA3K hFKSܻ &"Z *PK= &!hz.P020.A)itN&Xi -LcT-B:=o%x$)Q'KQ*KmNqѭ-nm|,^Z㒀4aA#伲 F-Br刓f'ȸsƛWei#WH60H!I-`](: \L}o9:B^yy.Cؕm).EC$—B3Xhz$NvJZ+D)kB '{J%JW0Tx KLD ũdr7ڛZtq 7 &'g&PlrXлA|-L4R=ژ,%rZcr ۘyf5ց8jmrTs9DD]+\ѥDaְUEʙ8v"u:&lp@]:pt`~$B+=8VYlAhpB߳{pL%Tr>}vb8?,ʻY99sҕ7noJI~aϧ 8Z F3[,_{HUCZ?' 4[i9BNޝ [I}柳LApE*v< Xa`ZLjg/cؔ1䚃;jnV=! N9㵛6i=|/FZJmAF ]x!xxpFo3`ݚ6cNr8oR5n4nV37KP3F$M/vɈ$VS ȷw- &iak*r*8s.{:Q/ D][gn!$]U3 ouV7}˞r)Ҟ{ȫ WgoZx_=t%ڛꆦ+"2*4nU<ƽ:8KYi\ ίBI;VF\ 1^|#Լc_TT]k M! NI\ DIu3V)qeowQH%6곊\ F93?RU.?8 S&/>>?W$1hOeG_"!") İ]v Dhh5Q ?lg<,]{pb,{O]>ChBݑZu2ȘdO`3cSޫHZH9gm#1V2}iȪ1½jA` 9T =[#I0 .9*J"wpȩƹRȓR> c}92e!mFv1fq>$;8FK/ORKctCeK 1\}D]au!/8o@'uQ_h>26pO9uNRƒ`$6h\>bogAh)_WߣKćFF-.a}cI/DKR3s<9!EyS*!h}cS@¤eށa~_/B1Np 1\ I[ MvꛝA0~sѢ stNz k4J#z>t8Fia1%vg zWpt.z#Ŧ;ޮV`eUJ6[A n}e Ltg%'n(L?:e@ 2贳ÚI\ Yy"rbvi\lT on5,fӏ$pѧnsf\4!rXi<$INZ&L\qZ*[M{Df"Ujue853:Э"mʜ^>9KC%K53s?gŃd϶;l +gy;r/O/X~ 2G^Yj|* zP#>2IWWi9"5-gũ$+1"N(AJsַFFU]vq=6t<,j\F ]{q5  C@|x"Q$8?>~=QFV`}%qKs#ϖ䷭`ryg_4F:b-iPy|n0rkppݍ#(F܊:N~Br%\71*LwD_d^XOۼj\&_ʅ]8X}p&˱6>4J~5X?_iI??#zěXxo;/ՊKe0sK Z VJk )NGXk|K]O~+n,AדK#0`Z|Mi|bM]϶aJh;Ŕ33-){U/%*pmBJRq<Ú{SbRN0]:!% `%$f*`g n Z(H";l7oɐg H2=Cc\zT0=`u<*bjI5\!'*@XGrßƸj%-IʺjZSSzcL)P~zx]Yo#+_Llcp s/rd-$w3>lRUiibXwsH%G1qE(9ٖ$}f 1( 1Ѕ1cqiaXr̝N1=+Yu"ްyymwo%S~y Ƨ3fI9d9~S?\EH6_';:?h1IM8&+xʰ"7=obo8z=*;ڸ`&7W׵WW_u'/^vxy l4:]-Ǝ<DTZC=e /#k p{Ԫ\:N$m&iʄ!9ZoEb[e> P*8()μ 9k!Z;+=%5kR;5bk)x"&BBq("X!.( I6`X;MZo݀d1CDkfX*EFl:%k"Ri>B)~|'1wJ0!GqE8cɠ\ @jp֨]٤Xg6'_:*:jցPrzʂi)YǒA%H :*uTkݜQAg.wKAy2cg[CxMR 1HCF(Q, [8%\!1,tX4M+DZ<%Var:_8֭$IJX\b)B ,PKNT)#IbX$h$ZEe<D't3Df`P ;虵>YQ!2QLƂae2[Pio+ bgt9Ź̙V5&'dnj[!4ĭhn@sfA^bް3 A il.=6kc6U{v<]U/BRֺj}5(]ťJD&vuI}+Q?%v^F%<0ZMxb+,a RFgFzw w<0Em<0eBa+V@=&A]:1c[IyBl!&ˮ *yvާ7T^MT2MM+30wTrmѻBr[| EWR-5szcNsѩtT])ϻ^zC=Y`Rg,{Ia,%;X:fq>| ?cors8/h8yx''F^h^+UnvhFYkrSםkz{ 2"@翀Ļǀ΄+#PӆֽG.Q%sسvI7U֑Ro۫S&8jE])m}ZD# 4fVJ}6I1祡Y~ϘyfyqǂRN@4nH&T7IǡRv"dWrKպij0|ij9J)&YP |9cEЌ9^0M@KiÚŽga#O_,}p2vo/2|yZGLĚvPoU?Oph5!-sgwr0\WiIDHX<TswXy/jE,~=B0^^C Q%x?D-z( MC,4Y 5^1]IE5ӿdի1})n8?Ip2Z"TNLYcxRv_T=;g^*o71O1=<ƚY?>ˏC'L2e)wԘ.Ř: 3#.e)֤eOzE[jef&]KH5Z{P=H[Ux3ķ[xV/~ _b;+rۏ4'(e\6XdL*gFl|y]C {)+o/j)zûw$KwGW,5uR2w,{&eD8Ӻaϖ 8Qλ+;JLsXJ"lOS\UA1tB.d+ҴE9oeզ7}&@DK ߟG)&xI]˧N~ ĵ.'e\(܎kҚfV]mi7+A팎`2 `xd 52Pe [ㄡ7qhZ-9|SV{kp;IQMP=V:z }i"=AI+* ӣpqh7Nk5sKȎM? $ gΈoD7 F sU)Q0zb#\[ژ>'bDO>A rWX AjyuiX >SG^m/C]gM9 {F=g*хoPi8H;kB[@v4ޯL^H0}W 0Aj9qqv}顔BtpÌӬg mް=ҵHY eLri%I O*6̗}9xj6<Ҝރg,vI< INQ\r1Xiٕ"xn߿ܠ=9;w|Nw#|^݊ӽI-Y!F= uQ7^_]gZ V8NuN"u2D32>pPy=,J- pptN uB2ʼnAqX+5XriVbM/"?o.6y̮۩y|]jquvvl7<;) S+qC.*+.?G޽}3'>R> ?\J l:~ ~w(^Oo.:;O㔭˾C/1R P|/H$Gzf<;׏7AP*9ה-GT+.lH1Dx‹$al&}sxKxw.Dkx톩4UxQ1ۛ 1!=5]#ר3K.ǽ,0Noc3ml&ķf 9VNiR fDzDZ (0R,97Hίq=_Ah q: Dg?}ɽ_?ݼwfeI08N>dO=qx{*EQF Ţ;@Q QG` \)ec+wXE,TZW;f\ժ f6}i(` Y'P$N\\BՖ=~unσ10dA[e@1|̧9` iȧl]ʷ IǿTmzf: 1bK^T[?2Jͷ{?264?jy{ Zbk;.wwS_c"jzq"NA![g˟~o6?w3-9I=!𷗝 '·]w0p ٙ C7ZS|[[si=&u8 TGA~:p0iwr{gݧoBeXF#R 㸖dODgTdĹ\K̲gNʲI[.WD,yt4,C$uT>^ҫXzURa+ڪ ³҄\Q)3)rasZϺ/]ĵfR0ybǥoAtykaHDژou[N}[EAՏm]fL[ioFy^,ZN)O)ZFF[y@%+d4)gXobV nhp\-WӆsX > 7+MXxa^6E0STrRPF #&^ *wT&J+X8b7u'o@.oPqBEJlvSZ#8%VdnaJ 8B7x'%a=u:^nt#z^QY4G4 6=ʼnrOf< ~/ګ }:J|o]dꔻsRTA:|+17-ƻ*A X*LڽN x,JAƤgH= K(z::6Ӂ6ՁJ3f:ᒭѸ48">1 Go`?+knH]LuÆI;]),{v00"R7 qtnqTeyTfUfER3 0%Uuޣ6Nc7S+qrHeswWCPI p;|M$^;U{),vX9燞UZܾk WvV\\$9qzp;- m܍@p%dmy K &b|rxUB.l'\Y^ 7Ho]T*Qnp=@ QM[ =wqe%%L-۷y98.6Jf:m$"/T^@K2k^W.:L&wd0*bÊ9vǔm/ qcU=c7ؠU.6,,. 
/V۽r\*:f_ wXۓ.0W5" =, k-gr16wqq#Śs1`@]G ůF3%,B!cXF2ZK[{5Ңk 2WbEcW<ւ;ax )E ;"?KdFЊ9(bon[C[}A;\0J\H)-Pu׹aP^+y'?tg랋"=wi҇Y bL7*yQ_9Kَ«Lm[Oz-'DTI f4w+od+L' 9,A4Fㆯ?)L ز&t׾hf/ iTS# _GUbB7&6(H0AQ6r+|b&~E>Wku<.yeFjOR%bG/|eG/q%݋V/uy8mDo rׄp#pY"Tc1f 8yEA5IUiց&dG"=u{R)s xu7IS/GKe^ ŔljSYޢǑ lJZ8E;|;噼žu?褒h SIA(:pN-I+| IHq"HUS$wsqt73!m稚&FSyySVnvgZNe_֍5t<5]6;RlFT&p?E=e־\>0槮y`-IGU^@eZ6yy^ d2/2#ȴ k4c=9zO*6FpwymsYܛ!. |yZ./*߶|xw 7Fn%Q1gCfS SltyڇBw}վLې$[{| yʚT V:$K]}rҞQDi֡d ^.ָ@-Mv\#S.R=Z,mq_,omr @&މUD[[.W c$tp!sLvW߱72mv˸*HJJ.i|߼텇0<Ѿ?Ny띺;}/ޞ (`&1dB(jB/w> ,s@޹H=h)pj4WyWSs.hj$/GBzyF# KRVsX|=2.`~iDà|9йpe÷~[ٳ>9FU/SȜšn7My~6 c,0%U-kF"<}L v9Z&K:"K"2` 1*_1jc1x#VS:XB#0s`V GUIlO xǓBgh3ŘȾ>,I Jz'*ӂh( %%$:8 @$V Dy#&ĕfZ$P*f~s> e(LqAA3j֎k$0;d^H!.1o'E2*]>TZf\[30׳uc}o73 hMqXﳙe$-MoӤ@ { ~r|_`fDPWbg!6 A`¯(92MRmz09x]FKW<4R\4(bxQ8Xp`3Q88 ^=G Q9$ 4+MaQH "Y@GĝYl SKnjrWل%4OK]=3VfA*VLMξUДog^=c."gKYTf,#Y-ZK[ȇkgTFVsRv( *1VanD ;sT xA1/@ {-@THL+s#%"(JVZ )@Sd%j5*$sUXUtdJJ:t kgѽԅBwkx^U:.tcffV X-I4IpW ENa 3EYͫA;@a aÇLuz_ju^,|Ek8$Xi˨B*XE*4*@B7a\im%Q).TAV@BB$KGSeZJ8b2FWA[b3O" ̯ɘAL+-P3-) JJU!-4%c$ ([ -D8R ٬ Kg\ƌQ#'1Ʉ@J)$R.4˝4YRWD//S9z%ӂԼA.VJRƠ+$U' !}u ~ CDs]"iBHǣo AQ+/.EUK$BJySы58~Q9%lZ  FLN@QG9F|'R-fPLW)JT9g Z1CN9oSQN8egRV9k`+3)ۑ̂m`O{~>FbR ^#Z>ѹ>%oNftC@ҟp=0'Cgӝz1E7Ѩ=Ij}< tު޹^>KIՐH4uܮ'چ]OKM7ojb~aآ~{]~ٌofwH}'ULHkKED0DZY:zI3Zv]hJ0F1y36б׊#LRkfO3GPAL7oxkv BaB6nfGse,G$5G$?U);=Snlip7irtʘUXvqdO0ֽ=f5mtfIo*a2M^ G <G@Htv<~-߈Jr?j77Q#Ғtx;:\fܝWl ]q9 vH=~:*nUcl @h1BӶiCgݴ:ڃ`~[Aa'uhtj$aug* zXHC4mwI<ڹjdZ(RaSU)TJ$t6::ƼZ[ [+[`=,*&\b("Z#k 4fil/^jTB0l;F*KDҜF!c{0A fq,OԤiH(XRcr 641%5VBZa0\ Ri9'YI$D)(e,SMJF`gCre7sVʺfHv,+_!@+̏ERH`LF4T|O]+BO-+BUiڨ#dZ*tBՃVz98mo%lRI] EaHP*qXElZP*1KmHZ5 >~͕m)kFּ4q)m=u?lac=  zI@O8&lͱDp̶D&c C-{.dv"JMgT4/ֽ< Uj) "B7`(tKpNVǶd&GR:=>%NkZffߌ}fͼ@-y[opk+/-In4c@5@]gl!4%đ5uXj$ÿ/8X1r54*.c caPMU0>+QRD[rGqq\#paצXeu`AZr__-t_= 3w>\`M-|"0-Hy7@*/nji?3P\RJ  czv9ZY&[B0qkl#Ropi$+])oI](M~a% mRgG"^uN Dʜ]֡nxK,VCFlܚ8!.ʽ "QqJ u PWԥ> _BsђF=fVE?׹ إ950R1iEkG 4cJxېkj6ՠE@FC>8},YL19^Up`0]Dm1F54cRӤoZI3^k@U$RջYC32wl9X~? X 7viv+Ѵ'߿?PofK7V$-&ӢLBukin-֭Ѻh-ސ.5Be,&DŽY! A MɌJVeJ'eJMQ,eO;/3ߡ?>`?%քqCz_f C & [+f_sa:foz_T>{`\]eA?_~|tԝ?~x-Z<}hwW"0LHMrq6I1Q(bdd+zdRܾ*6ī߫2sW0Yj_>(S'X-ȍ]3`&2Ϝ-mw{ذ"k]]m)(6lV0~^τj'ZQ$"%Lg9( < g3ӛŧoHo\p50, Df $S0*H {Aq8/<0_SoVN?hWh*m!fq;,3BEl_ pIn4^UPjtTl L;[!ꅔ/>dW.!%EVSJRAZPR*.3άu a 8,*Ah/"XiIrH uLt.d; "+J6Drڂά݌[ n@]*|Нn$t1N }>NG%l/; ]H~<Ƭ%XI#[qBJaJ8-po/Vݻ }Jsڌ۫jLQqt7 61[,Ԋqq>OeE,ocpW{sc{ԑGދg/;O^-Ǥ5ȃ<]Mp)r#wf]>߅zaa5|}Þ.~%0#~ `g=™܊}9z ̶ysg;Hk؏t1=L>cʙ쭳uWw#>}Wi#&m9fF^B_sO6}c3ۛ闖URոoi/43Ɲ94Y \|I) V+a?#sĘMb(9N3%1"#$l ;Dku<|IU8$UqHY |m)|!|u fVV_B&C40دvMg3 9yRi u2q`/4M5<'nHNIpyMfZDZ'^:@oJi|S?GYp5y(Wg)\ǝfP6 `r}1;4-eP e-p0]'Aim= 56MJ3˪.-as&4RŊ>ȓx՝Wtۓ}gP~L7c!к_Kc ʥ7$קǙEoyr6x6Ya: ޽8r#Q8N> s(lo>ӭ. 8Nj'w#."Ȋl|^bNIC5{Fg&7_wPJwn&K;56&C37o67P[%7Bxe>mfoS *4;rd\ a]w`/AدRs1{ji/a5v';SvG{.G z̽*O@v֬*r ܯ]VSx.zJ%$_V}1h GEnfѢ2 |k;)aqrjmbwT_eNx:rl˷ 1ksQ3IY% [* !Iəb!V8Ku)2ZFGḅפ.hJH Q$~ƒVI b*RDZN:⒆vJXŻ6IV$C΁`]ĥ)$i|TАL*yEzYZ;f} h;2 3t2I&hJlTD"u dY?Gj`]Ԓ8/A-'mТE_/lSp%GeaZMfs%~|\F1|`fl ./47b!r"tS-˞x0-tV̴$}WT|.` oO(Ë|{C(<Bt8t'/k"Yr6zDIb)Ɏ=y\gӊ{R3ZЧGwT~C0[ ƩAR,ʘFRݻWOMi)רʴӦ%SD"fDqמ^Hd,U“eR =FdBb>T#u,19j-+#YRČ9 ô(LV4!{8"t '۫nF᷏kBb8LJiDѦ:oVN$={x %孟}'X1!_ 7'<joO]rcgp7oB9M?sy֏]_ll4 ~OrMq2Mp\),6-Ԩez[X#I5%{U e M:o1h ]h)Y[i8fsHյ-k#OUOoڠɁh~Fa`ۣ(h8@'%(Z!b .fTρFaB~&;mUZޥqgaC80TY @BAV.J SvPbHAVh]~ZޛF-ʇ[]\ZD%}^Dk#pAھV5lL~KB˴8: j>c J2rCMdum;ĺ,63LWbqNx#*$+铫2!zxEJ ,sw 8s .zBc]K)%%4W"Y)rB-YfvCP^&Ab2RJjSJV<0wzxI>'2 [yYHoKBV!Z0Og,`䂳\!;Q5zȓz1 $ dOˌv&]a-dV)wY RM#F0(b5&Gmku ;* x)$'3H3dMVX|\#/Y͈9ʉܤ9ZYLA.Ǯ TbEGf! 
$2]\/ЏZȵS֊Xeg*),F,َde,ۙ*oK*dAIHJ;mu$Đ5Q33I].{WFne'{1\:s?#!($z<;ʆ酚9B P*]S[.Oqvm,6WUv.rȠ2AW_#EqC ?tiU-\6 tqsZ/б]o_p<}(a#I,ʦKᳳ$7+A $!lSoH2D WL6Q Zv7{*Pno(Yr[: *rރSKJ "X־O-YYMu8>Yt }|Byf`̟!"7(6 NaCSvzT(<>k%`pNj<Ĩh\o{R PJ~^ ^1A/ Q-Τ=,|?@^B% $!u;@idt^1_>5С C Ŭr \VTL\T@E6C} EU :[>> a6A뵎cMᄺC1Ȥj#qq8q$ P:5nFkj '|kyFK۸O.(gk}:>@-0l+X$ [kѢ.szi*Lw,Z0U6ʫ|9}4͚,O#IxP'\(#Vg< wzVʪU=&ipZנZ l`NBb2xӇV?NC3 AkI9ď9I -L#E)Y7"t*+9rGa^[/ b8h\ }7p(0(8aͲ;v; _ҥϫ$[N͍XrWoN^͊v"aWSמ5E`Bwks3`r;\>#.u'K(AXg,yXh7Еc5ҟXQ~q kĘx_JQqqVmԀn|.jdn_1o ` bGм|x}dԀR\>{<%2I?cvlzP=1fsFk=ܶԛ_n W`@kI{OuvUxKA7CyzylaxdlΛڽpke3V01vwHʘ,8Mp#iKhRUASA٭fʒ,8Ap#qw?>z)6CGwklB_(Ҁ1[}OclNR,xd[PapBѮ;iH]3)!u$ձE%oz( (tSy% IH.r ([D [˵86q dQ1wą9MQLpqdDFN,<:?ucxcxt?e9(QbWY=P0 "J8=F 21(Fx9w5^ceFV'f<^#8|{JHgx%!'D"7`օ""}7c"Wx Xśzi/+kátIxI'xY`9^[O{7^y-/Oa< )4E{xۯ/ɻ^Dқ׆K7³dOxpG+p.^dZxm:biK[OxEt02K_3*aĕJȨ GQ=<YLߠ?LgT 91aP-¤BXXP##08<MknCp !b@V  .3C)p #%4T\pOh>yo)nya"=j…Ŷtp1Ԅ#3;j)4jh)zWf庺}x.ߩ.ZfAm W :]oV_=h(|^,.`IF gϟ-*MZF*%! иNtB.M HɎ5!Ap@ޮ5$p9+D 3"+t@ u0:ow(s p!S<צ K߆a.Qo n ݸ:=ɰ r=C9eD-]`^??V2/Ui8,cYF_ٗG*o2h=}(Ռ3_ ]?=`?sFW:_jr(ZN *%/S\"L )L$@1 #C/oݍ;[>ӏSv+Y999A<,:d N\fcJRPf@ 9/!]fa`27^ -<-@RQ2H13I&eZLBвH)#刻CP;n*H!y4I)-vc?>=}rGϪߗl}75,|ج7ߔke9?׏+37p6o\ot9[6(t~_]ߌ &@x*357_~MNs-^>(0ri%e `Aq a F2gbZ+$k)>݁P~):d'k&X5ǻ`:Jn NJ}FTK7f-%0Yg"U2)y*P2S].`H! p.39IEh~n\PMbPObߗ/֮M TG JX܏zۄ2x-r6+FnC%քll&`RDz Ȉ^Zq*Q>y!8K D-8U r>ζF(6ZZ5K@KMP.-+)PJY@er!TKT0(L!PCKtQ9}ZT.ї'y2$-aB4/bge?lm^fIK)XJ3AHya,h3VHfJ7P+Djuix;:Jh>] }05 \ j%L4!)D&,W "&&9K3F$Pw)|eTM%EC)R  1LN\ 7&Y>WLG Pk~֒]Sβ-EB%O:L ċDYC,O?d6,Pt>ݪ qF Z9U}a?te.kE-h'%xXAءypv"ݵe5a-C= ̆#NTq?iTmlf:s AYn1GoJq3K ޺-2} Tt|;nj=5iQQûsj'<^W?.]º{J.m a%M÷=)S{޿{.6X@AVV>qP=<ލUXPo#?gӏoǣtZ1>l*9ĦP,_=zD0 h_$LBDQ-_H!mxWuCݠvi<`2egٺLJdm77#N. Pu}rˤR !bćKb!MLc,AeCx,]`ڷ>TW8S`1 wjV-KEW"HRHYH=/7*h {/N^r*9KgKnUW*>n4)? rVN~Mjt3Ohw %?iȼdXV<5Jb9:U/Mmݪ񯟀@Ѹk׋%߳r,!4$YIA2IA&QE'5uYc䙍,p}uÀ)zpwp֙-_|x\?%X5G?W{gֺZzl^]^>-f @A}oT Qp!!w4 <4y h^jїFv4 %hmz~k+{7HgjP.oz{ӣ]Bl~/&{ ^sYWk{\mWOK| 6upunK|\LRܹKI8zuu,B_ͭ VK VFUsć{uVǗS_k᫄ΒyW֕ prrr]c4#]xrN:u|*|_U}ExE;}v3|U"æ_OnycΣ w)ZkFA }:ʏNQJg_W߷kaPF3΄so` ]}uz8cz7_tڽ/|C< s2rr1V"KL[-mI6)A"STUuumEh9V`5UN.) B/Fłj3e)L?BR7[|l iTQ]lպ"*-.6~dݔH%<{YuT9&,y0G™R֒QD'qރGL$oKTX`:÷%r~׽ l"0P?ǐ ͨq"/9Z0d%ւ"w^!d%EV =Kp"-fksȌqߔQSݟ_T-so_\)IòX+ M4"AVqAnAT{I5%n!#͠&i#?Nd:?KA0JUnhzໝ0]wG.}|\w3|ao7}g0t"GU ~^7|{3v~3攧Ja.Od*DȘd&LBHlbx# #ą]|߶n,xcw'cCT MuU5;&ȩ_" 8,/ESc- Jm/-["Lju{Z-v6wX*ʾv BV]H8JɎUk,xc[!XUћf5e_&3Z;-%Ϙ/3[ .^mR_R,"eMv~w蜵u F<'^Wq/X\8F^6l^^ /2NB``oyS#T0AXj'xd&HbSaA ҦͻߞjJA,ǐժ3_B5u9{Ǚ cN$iG9/m)HNNb-^N&]1rFh:j`'wL MLs[BvDK-qSEFՎ9V Bg7*ƌs+qO? qd3E6\+k vxJo0Mud6:[, y3tk|s!v$௰t9fY2Z$~0 vCg8z}NoVьܟgi[>S9 ,1F]In3^~4~^.}"#\.]eG=/I4: %M ™LgqD^7^^|yz0¢uώD b;Y\0ܗbTMD,"w\W%{|sBk+E9bL pv[Ay[Ti>`F(oMUq)ךmjgN]0Ml(NX"h q%x^(ɟHj1ycA(!hBCcmDZbrbHPlp -Z٠ D)0gjLTR=  LJCG+X8@hv#&bNq1Ze{&i@rF\3VU =aF`9a!L3$&U=ͻ޽]׸Y-DNgM~mpT.}FL}o9P o n[;&wߎ`czk*O Xm֋ s39𜇘EК`0uL,nnaȆWJGm@9RjGB8Y`HcGtH :Ç~Ag03iMsὙ_*#$[L{nGf~) :Ņ;5T OW5" i5*8uՋv^<5@I"I7"} 8#ۤyR#L(ge6 Q>$e3}oqws~uM|^!l,iF8 F#s떆9p$YliO@r.J -X& $1L}DF, #v"X)v4Kqul^̓e9wP(m*h~Fճz*gݫ΀. 
W}6&Sऱ g- ~x?2~T+r%x`Jeǚ[ډ}g(HAF#[\0%s$TQ3go7=$U\/1HdsEisBg)O/t0] zIkb`{OgbO7`LGL#_6S\z梔nXrIBN\D+Tk[un[U Ngni/ZT!!'.[˔ h~^B2;,[oG)"uX,cpiʃVH4a2X V+iUS9'"#wEAsda+⺮ʋHb8EbVyn(QEZ\bwm=n -^AA"s8u<\_-j7Zg z,(K8@PF.s T ;x%c9 ZDrr @ZI@OG|*70:ķJ׏t۲)T%CWzӆm'(?|)h-1_8wZU ̌"~  elv!(9=I,sUe U2G-Zl`((蝽S5 deIE+΄C'\5_W*w oy4΃"]B e6߅=P3Ŀh%Ji2po һZJg{9,3N~L4KG LiLR(uR`df1(;3h}G(~=;QgyZodTk-pJ߮߷Txd~7/𫜤fOg3LT:"g͹V(b:FO!8 4Qp1!hx8-50-u1i`N.A Ľds ζsjogWFi7htiҞ"Ҍ[UcooEz駌4|l?,[qʐKŧ#SlJeڠG,Yu>Қ j.= u\Gat^d^dOOn1E֗Q9>ؚ.89_˷eӣAW=Sp}\ɠKFS[,[qJg#Hnjh=dFS cU*"a51sZ~oJ)3 { "9cd7m؜EqDݓ_eqA'z@fa祢aiySQ=GMm07%(-;zʾD (2PM#U}J#EA#9tE`11&\Mf|.")Lwx}?T9rZHBt8#(.SoG @[vHA].!'ܳ'Dvj ZBHgOHUAQ5yy(;ʮ=?zK*VSK-8w$*l709=!3ّg'? }i۫udBt x\\důg.ND9at  ǘ4ꂝ ]t.!ڗ͵KD:[?cx10$\ nԙ#8lC-3E>|ij‡2LYٌJj8[b4.訩ta"8|eӄk>MAs B+ ~Nc0,oQx]a+H`^-#Ǹnznw>~UHXe֕LT&ZWp)49E 6ogARS2&Hy _ALRg8d 'd֭y.X)$t5XpZhK†-ƚ(dQIr,MBفYRF*^pUkw?ONJ$cMoYNZQ_b2-=왾1SZkq93w$mZoJPx=Av>~/ː ~~kcNq[HT TylMŭkGy4W˺;[zPp :"(Mx-6zѼfBpUBVmfF9¼}NG~rfVC,!$ĺ.)b@&ERlL8odAl~}lAUGR3\Rh- w*־pa|HTAuy-OZM:PR+)hV AS8L`$5d8dsG<!!倥aNt43<Z Ֆ&qL$|R'L]Hl-ϰzFF pkђ ! QAD*A0ԂL7G=u\fe9u_AK:mSΉꓛ|#S@ G P8ݘomϏͯV_ҟ5o}xn}xk߽^]/"~zZ??WI TV)= +Ek>Gr}AGd(-p;纑SfAuDWt̖fxA%@UNʗhMJfzPLqa®cJvcT1|3f(5N+3 UaRQ6\X~Ǭ`Js!k!7@X ˠ2AA#NDv[o ePoY/Y-S)X nQGiG8 2T;o@f 䥲mAV8Feh)AWې,mL5ȨZ5.í]X,}2ug`e J{r)J' B"e3BH98FEGaK !!jqaшN sP?BGz߱B]9ڛ&6Z)Y^1Ƭ@K3j)J0v`2V3˽q}V_)UBo? A7ּ,0!VVz+["@aYӱM,Ժ&P1%HP-N[BK#Š>[j`4"~T02!KpBĺ%ls wRсnj>*3yu?&༖4x4\ ]oϜ# jLvΠLNrI I2tȋ|cQK\|nq. u3V E:P=V[I`%pl]a87{_""@pzx0SC~Wy%FYXH\ex nnƭ켵=A LXO|]wYU{ϓ4jDY[A&]>/ Qj hל$n,Vicsw%m9ٶ-y*^_.L9lJ)5zV }9W{.Zv\ brNV'Rܠ>~7iyy |1;T]I {"X <MTT^鶲{,wIKawEc3nNxsdrf!NRgF->|a3x$x`LdG?FîY}u*CPa8V!I\bc?5ѭ~v}Kӧw?-8k-Ǝ0p; \2xIruI!6΍7%Y6'ۘ 9uh&;jdI/V!Ll^/d=f{@+Yu], LfRJ M"12';]fo7wEr 1#pH`wC'B=F8@*z7a9ct<"2ύ5U岝 pqzXcSu] \bH(#u9 #*.zh_=lapc KoU T)X0=A @"nLÜv3H1D3k,[=ӨHl˂ŔQ|*겺*r:eUm3D4cd^V\qwr<93ch6z@O\`zk^HIθ~o1ǑaGɌc{yvyi̶JHK8.FqsLL;Ѕ^eCぴUys-Ԩ {t#`LsnQJX͓.i~q|_͟ ߰{>vO ~#Ư6] N Z{Ic+8\yrqX~Hs:қ>;?MYx!0Ӣ xs y07{ЊT%z#޽xyҀxaiz)*xflg[AÓtD<p ﮧ,u"yBt׀os;Z>;=!vo  м/o~}ͯ_w {]~ umt%>fG=@2/lVz?[2aD ub1j#,&J[L;,48%o;֮)?:ܖajр](M}J;~vfqoO%~W 159*8I’cfCx#.AQG/j_Xm%K[wioi&ٻ7n,Wf43X q2O0xt,u+Rcg}YUKJum qRC5s4c~{m$Vd 6kj}kVAnϣ[u{dتӌ2 7`o~k.V{h3\Qi>˶F@Kc  WD,N6FT'.ZCѼt [YjdJJiG{I|%- @( _  1b@іeBG[fAkb L$ȧ| `QQĴH{UU])-{7s)iOUKwyb5%7ñD@(:x lph Qxux5ĩV-5AҪCg%,.Ms[+2 bPQ4ײ>S\(\dquۺYH MkaX2! hv`D!4>R(4lsmvu" qi=JnڢAէP8}3V |. ES{,tg$%z)\HOD5 2X. zQJIV2ԟb `8ˠ<|'@UC%2qPmFHXc?K(o^<ˌȆՑ K3yuЊy/5J!Rٴ;t+1O⭹C&MvZ!Uޠ㲶UC[=_KVʝx@O3D&A' J<(0a u2Yct[ ULqNJsEu1 *vwDai4gTD?*ܖ p3uez3Uҙ5٘> 8V$-߰7߮'*Ucև|ˌ51ufѓ,&F0ƾyrw)vb.eHG 9Ȼ#b2Dd>c.־;?C"NrZJQ=RK**G͞4g̉$n%rW-]}o: PtzE=tw$l7~eöRvSuB?R<)iz3ZߕIY~Wb) #LH!P\픰֘ⰄrΜ y8׌ ȹ%DdqHdN!JjT㻒’*2K9ajw1x.YRln%))P{)ױ7 TtNl;TjYk;WC8_Ft~iwr oTr:߁fG8F~è#<ICEysZPݽh9<+.$29zNHmYg7ou:8tmpv@&'ns][SJgom-^Z92E%Dz C ^?<3j(˼l}=ݴ_z=B;$4>j+$Xߋq5+Yx}XշC .k PB ehq} ϝ~х:%͠}b] |S_||Ndfh8+ ,pkj5oO ?kh|!)ł|}n P_zzXI`2Ӕٌ2LeӃ?ǿ|Wɻ<$8KQR!wɌc3su4 0Ϡ`q',@w2\X_muEԛ(ox0{_}ޢX.Ҳr3y"?M0 $=[r=BF2hJe Q4'CZE+ `r3/{Oxu}H-˵NrkE&Rj!YF `tK>8IXVq? 
I 4MrBh3Ib.HdTx@.nͧRΧwWG]LkMeU4ibT01F S3!$*;#Xρ _ LxmiZx9,r{ˈ&㎅7Ȝ瑪` 1*lS@!m+DN$ (b&UǹjS L8S j 'C:FDziɆj)n]Ff1-M_2HI4IF* b`PcZƚ3lR6w u*c#Z@jmjb'3v}Lz9eZg( 䩁BEP 5:I .ħy7'\on <v_9O[oY.^Rd{+Nr5 Ib؉Ls2dxk^~L//SȖyYcmﮯ{dyAҴVhz$}]o[GW}Ydga@Xqfv1Lˌ!)qC ;V?RңHñf$e4f~ -R \.%ŲywH5^qz2s"م^D e($Nn5QR0CX0-הȕ 03p4294!: T ?R`Q#a*|@!Lj c }湲ZJTA4QiTj/~|td\pZVfG o(OEdR̃ B)whebj?b4H3[L, _|i K%b^y33Wf1Mnוunf4F'9n)p1&Z֭F+:@!x >qT IP]i+75ei%WbD}䴖@siF &INATzFLBsKޭ]{yG@3E,[aV o$)=嘫1-ɕJDm&ZH-h9nFȃ=p+O5vMfnү7@W*)%g/ ,HHv* 5VKZEqsTsتXv!Ytp 5}p΂Tl`6uP[SgE Aza{h2]EJxGZ}̓qN"nk b&yΪɽwQf fR@ɓ -?Zbrj3T+F.&ѷcU7epԵ4#(!u  M!zz͟DU2-U>L"㕛3KV"}rI>DW_6b> L1XRsDqڣ3ƥ(rp_wřއGebdmXVB{>wSns5'3R؂E ,,<(.aN ׳nK^A(&L1+0)s:^_|N3/lJI[h8^ I7)#ռO w;*i5O?g؃6!\;ϙ1^ԕL2R$/j- Lr6NY /8yAh)OŃwg3k4 x_|Zo ++*ݤn`7i`Ѕ)GS%7N3+*aBO~g LsFXߛ1p0vqV緵2ߖ_LLϕyʹ(6|wz%:7F@}Ktγ+@|t^@sL;,ߧ"&1Sr܌>*5o ƽAk>Y cSrOQd1nqr-荷y;1)BLv+@5oĺ7ޞ-1NrNDP~!|`'}=0]CpKZmlΙ8Ks9raσeGZ6"!?d &a޻YOZ Rj0-gE¬'MBAT$e2UU;"aN:no~>(JX(zMe޷ipm6E{W{OpvGï߲)lݻ-}ST$mv_WֺˌPG/A}4tკ}7ҝBrJ7G~H)8חN!ǔ/)t ŭK{n!QK7݁"SҪ$QHm<&(:YP",b#XAnƼMBl  AH-L(JhP"E!Sv-h, ZHqq\׻|kh7&w'_dt{ K6{ӟ iFVn)QMp{ ;dG1b=:|@e+(ֽs4WP$RօxHgK>*CgwYFc$@#ZQE VEsNs'f*:Is}ֳ9UT u8jM-1EH" DKc,kqgƫk8EZJDM w lvyF8311'#'QԈIRn8_S3}V^8A-H )1:.4sB,r  2O 7al.a3tR.ޑ0 gNpQ6iWPBRH A4L*-ݵ9)cuR# vV97vUQKZo@H-bLΌ-{tS4c ]0#4|đ!\ǁ(H`q@"JJæɠad)ΡF:\caoN(q!z@nQJ~}0X =ޔ4+`cf#UU_,oqgкd.Y9-z{/* feߥS>JIa X#x^-  ?,JHwwFf6/5;X?.ch&mńۃ?a"ѥ  |6swCvO8n4 {Xs6v42YF%LmeJT.<㸥S8ɨ‚6`,i%{]ib8O 0 ׭L3rks:%p\ǼT{?M*?S7- ~P`:g|?ȗmx\ٺIlnS̗+J0D˱,Jb"6NF.dP22#$Hh^E/,ot˺m`9宄i N˟pTO&ˢ!OE1L4E^xB\wpY ] {ieDԗ=9|)tG;UkSAŧ_g 'T):ʙʊefJ3?U@H_s`ʚ=j&Ja}_}Y"Oql6߃ixl!GH وK#7I*v Y]Jg:4۽kfauڈ,uY-$";$Q%a$:Rj DgHtl!Q%q $Kic][JyMriꎪY kric.h *-fU:\ +Flb-{&X iߔTJ䴄:%E*k91IA*TCJڔSZk./NLa3`i뾮n,@ Z䉣ָ\P%La _ N'M/.1jTmǿA'v^pJrD|*py7j! X+mbI˻'<|48QOcqm=Vތ8,wqcEgt$o_FdwM,K53;rU}(o(jd"X-U#D5^Yʫլ!im֠(fuDC5T\k;FfuK0͐S3a:_349M/#j/bd(hYsZU4v\k7+x<ʑ3`y$iш`;2\ّ_(}IhHn*0NfV:H>UPuGC3p\rASҴwszV>Մ'AWcYDZ Z?Xm=7Qp¼)H!YGx3IPU%}E;F/9;0L{l aZCeM)~خ)M6(+:A:$EL7"z[W3MJ5;Ղgz7@Ӛu^6}935T 15L)*wmIW.f[d âهQkX$FXH"V{FM2ODFD^"NdS" "H/B gܙ3V*p9Z. i0&pH[k7_{nS&"-ll #w5*DR3NkDp {\; k'U p'qklJ'lEK;e=O=8$˖BL%_'&A>[V[N!0NI2h: _-FzS Uj3XnΊ].Q|.O}īߖo~jtEqzBﺘړgLH)dcKշ4;'Gqfqp9ʼn݅3rE~ %iq03:cY%dgÛW[喉x~@}?sd IqZ@I7nlh! 
rZ@|C߆$dN!V`|( (v"6C2̬e~ǎ x51^sz;zղvfcܵc`|Bķ䕸 q o4>D(̂XD1S$Q#Y8 QNNrsktb_yPc7\0\s6,1~qM mZ9VkZŵ/bxk K/?Pu›-ǨA&sۛP%p>3,'r*,M͏5m߱9+4}~L*2eZيz^ii)$!:@5מeRx)y<ّqmSZ>({b8l֭-MT;vn )V؛ukкu!o\Et4)s }5l>?u!Ri.<`&]kr`]=$N kf>n3Gn-={ϝp?{'!VBqw#QoyǐAsq]iMi 7w'SëP{_ ҶNw5ҿď!WzCXEEnvޭ'6pl5X:շFk=^{!"m[-c=9w< .Jً UwnPt~~3ŝ\ABB9X68z܍j=3vxXg 6`b) / ;Pʝ{@Sa XY`hQ$>O\i*-uK&YBM?`$׵ðReɲ$Qr6h='1!z/Zytr{ڇ}ae)~ʶ5Y,^7z[8tTebMkL=0uW?j+&Loua0MF-u A4v謴cC&$aQpY95[UЙ o@!S,3f!p>u)̲'IaQc;n]9E.S_6+ ۤ0E0¥G酱 "ZN~,j#Q;,p-6ٕ6e,vLur2;A5z}>ي^5B%C>J}2S,K@"{2p9(HVKMke`.t c \uXa E:Һ{mP)]yרu ]533|uQ/+Y7ۛ%XpiJN).rnktyP=2&'ä9fø,x+i#Z) envI}v=+"!H c[61 CL6Īc~DH[qj̸1d NӁ}3W xk»؞z<1ۃnxj$V"D)SVFD;lX^?\٘kz nQu_܏ y6c| h!Lvv_n+co6Yā)/".ѽ'}޵DWY.vZK&cZR D~FH =~[lş #kBּψo65B8*"X!jN{2kxȠfkcBQ/t'x6 pexW6kMmJyv?w# JZ;P0|Z$-y`Ά_~x9?%ߣ(x'8&2-͜@=h*[ܐ3nj?1o?n7)ri}9p]hKކau/E3SpcZo) H%OWWVP74 Bom>QUJsNUrtSe#f}SP@~(5cR'_]|{}3>,r?Ǟ ewۿӿA Blt "U\$Z@&z\AeL&*tR )<`dڄ#99eAǿV?}U_'2EZ<#vZQ)jD~-F+uUjV&otQ1_33SQ N(p'wdxʐ!2bM#55RA)W4 +rtXZ`O JYS4A'tJ(_Q|:0ihY0ԧA1G68Mk֠U kERa+CYJ!A Hc|2j^;ZDA:8 ZEZ~oaTJdmNA HA_ V#;Y w*UR*WQqda8iMA@80*+o9f"dR^YuVFszYpòA)A1KZ7mls|U4ǥ9Jp%"ORH26$3nPuP R:7 ?fJj elq9Wʩ٨59U!ȩ ANUr:ANOQV9 h]JfD[D6KeLZΕ3lgɒ'Wq}?͎Gm-m]읣~6_F P\ X&VB"*Tr ˝yO3#Ϊ-HG(ecKIN%c1d֖bxN 3ͺ] ڷB ھE38E;g#I"V9&dVX)%Gb*-"z 9q A1#~!52 td<ܐ2XA,~)U&rb:n:8`\`6*,LpO?IDN_@.EFt0aEƇĠ/8vJzj@VJ&?zgnNa 1@ٍ+a]79s/k4Ql#́l"mKFZ }yy~[%ӗc2#0;dhy-9khg3z}R74>dS3t7Loz1 v I94Omw r+9ZuĂεY ҂ε\~q/ -Ck*&G˲UZ'.!-ZB=q%] ]#"!x…l.xNt+QA͹']4gbʁq@ip圬\++O J-ܯNv=vج0]6l qh2[Xv^pu 00 XU?)YG !@6G % ǿ{V0n?ɧ(5Ѭ$JMjA8j7Jچ.IbtM*v׉JӁ*zwJӁJQ!c,L(b)df%8D)* BGX!¨T4!4*Ig[SP `U"*5K_hTq&\0Ȉ=pGq컹{K.V1QKJ7M (j>LS]&:muX6vy&Bx(T 0X=/S9R5M5!RPV:6(>Ք´*hwu/ifVP7|Q2ϡ#Ąf4W7g_m0[[,Ubc'?0$EԈ:eG012"MGd%W cG{{\BqȣDwV3+L"]\[GF KWd5xC %ĿjD %;EżfIȰe}Zvרa|INNdkԧZ`ND_RMh/ OB{u37tGxzk\u-n&LeEMUߢ1w7}j9ü%~@o0GxMi!q͚Իfc^g],1钑c㹚h棇B<ҳ%o߂a|)~HW.Q2pc{vW7*xVX BD'UVkyfŗ \ݚ\Dcd" MzEb1FvNļ;-fր|"%SDޞY(b[,!*1}$b^ nMHW.Q2U2=uќ}4_ռP$lϟ&frjUBmLPXY*H턳L`|2vB턳Lu;N`zvvB33uzvZ턳LPX1Fg;l'498PN8 J Xl'&fb< g;ə 0:A;A-vW`'2JBjosI_}- jKں VZ\f?PKxt)i s]n 쯽 Wf~΋ncYBfP=1ִۗ t k51D7,|բlֳŶmA$k̊z˹x]ox7o.iߡn77J #afi;9&Ϟjp|CD&1e$q)\f'4<ؙ̢'\ #5!1Qv8Ԓ3_^DOĴ9'$ظ16Ñt"$HG?#tm1nq ?, iX1@)D(7T hAL'ZIzJK™T9}W/7,/hՃi W8|[/iNayw _^wx9"(_ x6_8i2Oo/-z-3I.H??Mp'6&݃˿bR3T5eb#NŞh].kK5.DH"yWmd)l0l/`,?߯O;AR6W?E\[z_ꉽO֭.xx+>7^)*>'=+9ĉTF kX8i" BbåiLaUgcvp]Cխ&? 
0^_}V;uF؂N#5~)7K zˉKx`sx=?cu Wj,.o>%7yz/3oBfa U-#LCdthkkVB*ؘ )hJX.20`6ΝQVQR r6!)NQ*%9?-"הh Z`py\Zp9\iUizdj(uY4Ygs-coV] v3Ir>6F;Dis$FP-)RN AwIySB9\"c,Uʽ v|C{D";")Dד##]hRsLY S0jyy6$O2x8`IxIb;ZHG ~KBԨDuϗNdp;qrk0R03$G gjd#'Ϭ4.n[}#܅<7)SDy҄ba8Ѡ (a%#n1,% @;J˱χvIQ "ҹl",r)6\˔j'X YK`OK+DpA`?:#CZsFC(?a`|9Z5 <^{H%f4NtmDd~FHC|#m~8-|s.zJ&zyE$n D5*JӂP2>E(+ÖF BrH î-~_ol(V W>m-?k1g:, PM>>Ĭ#vL}4UVy8jȝɠ˿Da e/^!&5=1Mq^݋|d, oOQjFsxݿR1f|(p4\;cLȌWɹZ!-2mjQ[)i"d *MR_.cW"]BtsՔif͟Y(+D.$Qiy LsbV$_TR5肃=Dž|Br[G](.Ve_ъHz,( :ARVJ% _qfw{,J j8a}Y~֌6LEkѠ|F]M7&N KƜ8&A{}/x1?KsLdKhؽ(eJP=4ŵJ<t|zμYb;ժyS-Gh~%AS+<?W^Ϟ -+u!|z)Xz5M+VG拁}F"ݙtqnM̐W>EQ?jٝtrn ̐W>E}攰v̡ӇMY~/.SU[>xwtkb)5kEUQ}ь>sJS 9ƣ=?^h5>TE)c0 MJ8=;*F;a] 00 MON`NFWx[vh'4$Nht%hAONPTN&WGz [.gW3Ŗ9|仇u]6z1 #lth&Q߲!f2 _d'Xj\nwi'm&NJS~oܫ;Mg (#j4q`Ù_*/u,B.ZNE\.J}ܴEZu8kpPvC<:04͚v8:_jLٱfIXng?4!Ϧ%R׫X9i WqQS]5iGԄXu%-D^ۂv1}t_~jUMtKxts.u\~{Z6 (}!F C<* c۲)^nbO2֤il˜*FuJіwQP sĄ 1aݽ^v q?mо0T|}p{yv%\g禎Sͳ^Si7bX-O2 K$=oC|i+/PZCt5(]=LÇLtmӵYNf9]is00M)iCyS D0R͔ qx`b, 9NUیc"2Ud"}E;1/ぇtߥ?K7?SLݪ.}ǵ:iu{ꠃ;ZB#-Z yyK@X%ܦH STvO8%X^V}s.!YD37a*&VQah2YRALD0")ׂ07JGi5Ȉ<BA 7J(2_DKF8aRe8A(I1 ^g$ В Zq$1DrHIPȅ0G9̳(b>Z Д Ei-XHF:JLh!&Xp+*J" \IgKВlMimN!s6j댘fiEI vZؓ<>{.G>%E^XSzX6*{;ifNf-P19d5h:ZðVX`}pйNJGc-U1hצ!]D GֻA6Wڟv|wO$S+rg"'*V3|yD&o~|srٞ?S3 hto(5UR`V/OON,TSJq ^f`>ͦHp}_MSs*ҢU!щz.w|(j#xJ0,-ŹxLjo<[!mDNY "R36QqOXIH#5)8:hOF(`p,YeZIUFI&8V5d$J& Vbhȇ7$D>&޻ŗ3ta$d|nOwP;oo( AvX9Tq1,Ca+cg_H .lʌmjPYݽUӮP  U~3agYևƂѬ._sy$s]sՀ_'\ԕI3]a]M`Vw!AT1=L]:Zw9Hlc[3h! {Gxi|8 {m}ifU hvi}:Ș75}YS8+8j^l5dy)9˞mC"!kW(}M眲58.w4tBfGr 167ZC+ ȲXwh/65]߇~pihuh뭢 T(tWWĄ:bBĩ0$,Jk&H FgDTT Ei ED*hǦwQnA- ̇W{~qy n8aPE$cDu*N?^[ F¹2fi FT<]G/9*%d<~^gGI)̐jvs9xȇ,_m~Q eW@-gC)v lKȩ7f_&Bh ~7uEdɖγbm ͇fuk4ߒ syA,hM5{/>rRJ'RhʧPLk}XZN+p<)՚K:,^V2zAh5RAaABIAudi8MBt, q藂GX[hw*RDWT /UĻ0&*u7CAB.X*Wt.淏EaBfY2@Y=5Pܩ] FA0iNvH)6F+pH1;٭Z9쮯d4QIgdPkn*_}WvHxEJIU-^k;\kO8$nȠ]# ݉YB*tɴ+T\ܚ1(_C"&=Qͩ"'T&<ՠbVx*QZ7ۄfzP-VGqL ׃ٻe}&V-V[U!KbJ7pû"n%{߬O5HR"yu@[@| =_|W ^T.k]v?/X;dN=Y!N}⩹` I/`n%/VscQ'˻G_ zE!HnUc^ڕHU6ؓRL%Je;m,JwSI+ t{*d),JwQm&`wWF(Q@IMX$hSEd6_6# p;FC}[* ?%vAD`G DflqE__J181gC( 2\iREW1 PY[^zhbh MgV4 @U%~_: 5fk:{P@r9G-vXϧ UÛݫ$ڱAuƻj6y 1h0YĒ2Ѵxx:ĊY.vڎ \S3 Wܮ<^:!>X,o׆ͅ)R1>|<;.|ssry1(,$ݔ[uY(.CY4%*RaP!dbUJ|q9UGC2GD(GԔbؐI2L ΣDf) &v ( QBLg`1QJd-툖RhU@KDb 8',L., &4KMP&a:Zє2>)a*Il*#.9 SL)_bMjbHaycTNTp&I !C8 8, -_#Q&2LR1b`w@\'8CKkV*4Ͳ<.*ؓ 90P,X-|qc v)S$eI!q< SNAa"9$NEÛI[xS&4FԟcLwNtkP@;27gB-{Fe,g-7P3PIv~r6Y\{׀_}>Kaڃ]Si]狛܆2xͬ:VyÐLi. /cb44 >zg -$*_l;gyV 0Rxb[V oUx|Wpt^X?qQô?\-ϣ2}m4ۇ?5+8dd X!V͌\95x<ԨƗgv=V8Bc"VmqU4**,#>bR@X >xMTnT *bϐVgK'_svU~٪3sAu ־WS!bT <T z_Fֱ"S bt? e>+A }Aƞ! ^p»Y GC>ۭ|)^٢@p%(!UqQY\Lw_֩5nұ󾋪V &Q|x=\{]n`L]~FTz}$NxJ1ݤmEAB@ $Zi)V6kOyC!"-(C;Z4WU>q1vѺ;hAbcenQ_2FGܬ=ʉ;^}U٥s.ѽKr {rq{E>(FyXq 7MOc|P:o_ʫ+ aXo^aM BPJB*A%:覦Tʊ6;~͇jb?Z5nD~UJi& Qܢ,YN`dZi$K ?@p)ʅȢPㄨL8JÛa$J~QN2 kZ]7ҝ~ p'<@ "wrq*&Z{cIK -t.*8մQ'! ۉ7yŃj!꼞[%CՌz"ڥZ[(JĪ\^̇jNJgۋ~~p F*%;~m]SGkv{QAmx;)rwmH. @Upe;[ʝSn `+ۊ'CJI̐CʲDyn49eA*&|^ &I,oc@y:=tq^U{,aR-x[L-Њ'j+:ϸKxvϲ E`Oҋi&}T[Y۬Rjց' Ȟ{.u}J%ֵ!@pMA)qu.}bߜ7ߤ' Y=nkvysFA 6^_^\aB6Iї-|E}La" ťz<3FKuńs 2/Fe|q"m$߁,Пsv&K|"7E _ҧkj֨ڗ\e"tzBwfP 3Wn|7B+җ2KK9)9# Ȟ}st)IUʼn|` +L %3fFԂty}كb?v8>I>zQ:8 ƫ5 it|RB 7l[ +fA`PA@aĸB适e l!h#: {%UO6M9qRtD*}?a4xqdfO3j7;+.{o#j^KK3n6Bxp &j7V݆8xxcZ7W uNf6mWNC?tlxoH* >'S ǵ7L!ޢ~:=8 Šѷ~xO ~9;;t2WiQ}99w6\ |"pm ?=:?rJ8DIm3Cw8.(qNråVJ_әdԴfϴͱr־&=wLh傤,4=Kp%Oud!3БTP ZNA1 )^zK42 P7UF•u5(nܗ id9 9B d<\J@30%%0J?lM 5,U:B8 +bPA (+3nMCؐ3j6t{r'%X)¬l/Rk)~op1KW1hY7k+ctZ3"3:LLe1~ʫDM=6+ۆ LDZN]#xa'yz/ų? 
"dZP0q&ƑE'PBb ?rF_͏.b(bċjr\yϯȖBhl'K~+NJyƯ/ִjD<5TV!@!G2-U`U4HL Gk;L+u݂j鋸NN$!kaphv >"| hF_` 7=/xaj@J BlFbB 1 Gۉ0Cdr <eTD BKNSM)˼ƍ)|na]&;K>:`.э@1YOM,_1ǎ^)6^6CkX'X}k&C+mF(JPqBFeۿ_ `9tk'TRk ]umkn\m5 6K|Pl*ErZ.]oͺN87._`-N@t+`=?3\:X>0 èЀj!4;-ezt-[.oRMt(?jE)xa4\E#6J|>Sfz|n/(?Gӳ$rJ7;GK|*QI&9?pct">l+|[BrJHNR I*!9\,bA[?ʹ>xBp# ,(R&ܑ 9==j__#WEb ) >f:Dln<(G ̪9Gϡ6oϣӲ˴^*#taud i-j,LRI0I5& uf$*"r儳CR)1Q?J7u$>{:k;(f#-Ld9R.4rR@aq ,BYR"ڪN+XrB?;VeL|Vfr,>1np/32Wd;P%sd YR*^k}K6 kҰ&) kҰ&4% ΚVZr #D J'D^ccȶNʒj;kC M$Z!K%I:IX'~.s -xgc4%B%u"hMrG=cܣjْvVv%b0ILK}=d,lp][J,;Ӏd* A{ bX׉<z99Q AIL#JlKIM2%?/ S;sq _)~{[ʻp"XbyGvL: $l%8 e wQ 2MeNH+V#;ϊCʂ;KOkI6ݓ@xWH7;L},#ZF>#%`hSM( V !&P-eQ8/5ƴҳ1 4cW;yXNuچa]D󄜇X4mR1VGetAmRe *Zj3>rRn&)3̮ I͜rt7s_~wWx|'" 8w/gݾ<}Z :Ttzp+| .т[~MloΐqrkDqU̙ξ[ʹ\@ |"pmIrZkqɷ3~HPsޡdLTi=r()JR&)QOmh7̈d,?{^+G3T;LA3sdj* k`(J (eF U2atD iWh(낈@%(fm )Ҁ ǃ7>L _DF[F=""{ ߌG1*f:xZ%9\jB3F6gъ iV\\z;WbJJٓ]FIM%x\ W(.Ӑd';͔zQzS~!$WT~eܿz$Gd#K eq›IG*%$ ?oׄ2IS .< c[;1R0d)R4@ `3&T:Өи}_#5蒘'gjƑ_Qܞl\Rd>V $A[WK eH H˕D4@wpnj;rjЌ`J#f&^8n;u۞ #/6d&rRt: 6M~ULppsxJ !΄M9N4%NUhkE1 LRq #X0Dqn0eHy` t N{WӛM8z ~~фs3cJ_P1tó #-ReTȒ$~r)L?()E$R{_V'X4ɩJa )Fȭ&dR@&υA UCH(+.$=yO.USH DZ O׸wךtBG܀'s)ܸ: 9k'}1x1u'$<?ٴdG5M?]6wzqʷxd ZưGudЬ:eq6=f4wE9o~4{x뷘SΥZ2XEu[OK¼A]a ̝s4ήheMYCK\,hJ`,&nݶgn!4~8;9Z*88p|:D0E dUxU}Vy!v6l )@`mE,> VТl;#/Rx՝d mϧ1T ~a p]0ctEM컸@g)ݩX~2"m:d) O\ ~^g fܽs1hʁ>f>؇r3݉;?hۏ_>[br7{I#aP0dG lGrrVÙ-cak3SW; cO4?Hސ`==ט+TM;g|~^_.P^5jӻ^ j߮'7A<ƶ\2 :c׷ HߵèG滙g|S\!c]TI>e>"* 8adjvd 5":/Io֤3*~A.鸴1VTIc7-Ox0wvµ= n:"+ Y22Ȋ^@^t gcyǷ=u 閂wቲ:_bޫoቒ OskdY?Ĥ@eG_W=0%}8+nk땬Q?];> %"vݓy<# TV P#<8fy33oq1 hg }<{>Ϩx~cVpt>yx摃TL{fCD24ɞQ"hΆc1|IptJ#xPppJYq28xzQ]cX{^}%Ezo {gsǭT 渷''o]g!g34,T"O\";[BU*Kkirx08S57<ƛi~g02DL;]_On.!C^9LQp nCnpThotZ?O(DіW5hKٛJ3|c @J8F +D.'@j}I$נ(Ze)Ι%(qK?Lh,b&QJ5 x*T-ݽ8ܗe4Uѣ˔;A.=(+F)bedF$*dvaR #MbYM%1&Jȵ\BȣGw4Br #h!676Նy@T#LER57"pP'TK,p\?{{_S@b>>+%16~^g fܽ;02Ed6YFA+w\>l>ۏ_>[_'bB j~Tـ_ÊJVkXxy$o9Cfa[eVXʎxq4oI~5I28Q嬆H*a葜u*s<LKFVѪJS q*E{b iꌔQ?Q>|=&.Rq̓2Lf>vv>4k=f1sn[+I9Of4UE.!$>zZ.zu: I;ŀ:@.:"/x  N[ue EI6P =ETJhN0AQ;o*$gCL0 (xSz^8-+<~shEhi$t.Yn@sLM+ 1^U.[mnǍdLc$jsBƹo1qW A\1)h#6,ZXbz2JA?5ی[*p9 ! ?^ւ3|S4ŹFxLuQ6 >dW7f:In]߸u+ͺ齃k|Y9Vh>U h˸T岂6`3s&0X3qdń!iuf`i$DB0;k>f3hGG”k+#wL??sRC E&GO5)sn3id? KVsZtK4h]Ӓ>T-Q7mDTKD9\ɖ:$I+p;.wr>ggRB3NAs`W$ E)4 /QB1Q#xtJ@0Mk{{sy])9oRL;"kJHG*p"+Aje^q@g?~ [۳77kNOMQg߯3A\ n_?-<[}|-Wa f;|e"m,^I>DSI+D<;A/P\e)Kp_qK sp> kWsĻ/v36QgϟS6?\?'2z}|ٷiڍC߹۳w>Ăyb(.ʙ_G7M\Eznl4G*IhX*X*p-p\K6oa%Mw噘f~ŠD&8ARF?VZ"PCYBN Z`&-6uP`E5sBX3#PꕂH"N V:CpiU _133le ]EU&h+A/11Z4aHPBQZ}~E[~RHN=*`@u; (m X b0a/4Xhn Q<>Rsz2cNӧrC%"͸XX& gV`Da: gsd"o%Ⱥ8g#PD=%qzz:D=)r,*$q勖^D=S*m`,H:LCL4b<Mp4Lt wq,iU޲"4EhVn ~d+@%.N;»Bؖ]Ji,{`E~Is$Կ,S=0idG6WOf1(thrW1A92[As8?z-6AJc"9pgs/V{9(ɞB€`yLN 5@10+akTi魍I!5" ;KqGΜgy]&H:>x&hpYoB@bC mS0Li4\%%JnTҎNK;hٲ5Ąt 'q7^ ̪Y?MRhaN0&&&ձ`GhyTx4L0ԖٛwggL=z$hH׎Z}r |@.gRe0:2Ƭ~VޥAI\,g$QzbS(Ĝ "qm[Zsdk 3J 5 3L|4䝌%w !Dq=T$Icm~-2)dٻZл+w?dî1!17 >c#yz:v})tbA. ^737@ϿO6T>UMCY 7"d-5nyExB LlS%n/n%6)݊K{KMd\p?Ê3OVJ*c+ͨLP҃q23(jZd9K%L٦D &$d* St;s:Ɂ|zS.L*LLzl^IAUTEb-KY?bs-s}JL@u{~>fØR 럫ydWr:(T;XiH5cO ,6T3f+W;`g.O HrB9iJ:"$G,ިN3yP AMks&]^*w ZB=(`lurM4siowēx?]n8'/>0>Yğ\6E~)nTp4T1@TM^:K t9fQ<'ˏ:t\q6ak f\es{[ᬋs[KlDrf,X>&r1blh 'w"N;jW۠ЉW<1pĜ:"xX& 'K$X"B9qŇKhNlSIOHzϟ?]gGZ4 x!3S\S vZX"U E+eP Sd>a.,wWOt|vH+OW_2;,cRE F)]!AS(GgoyDc>JYO6O\0Fk) S(iQm,<(1z^XwQɫz\!/8(ADe 6-(Q:=tVS30/NdsP\_: /0y%T V\'B*@ LX*hq1[Y7-9na-50aN4 K6V(q΍[W  K YD['Aw0էkA6251AFd=! ˰kD"@0pYvddo?)Ʒ T!)摦sťR h=Dd^RIek %xP{UeQ1zѨ p. W)<4`6CB@T:)Id*L1/Aaalމc9wh)4K;{rJ CP%[SAl{ý,J+t*PQ) 7DoֿM#'w, K+A/c^8FD >-u𝿽Zm}ݥ]_~w`iyg\.+yeoR߬+|HY`qxJ3jǥ[pf swJ,)=-'0Y%9 dEClV]1=2^8^bsZlc(z. 
oU'WSEՐ|3 k BFn2$^'j =4j?a3ap$r :*%<;5 tŠU`'taeV#8 Np9.3h!$p"`^BjuknJV [#Xes2LT0YT-,@1QP\A,:~&C"T(rx?uXj%N 7%Iml%+ZԂ}zt#&^ ƒ'+(bIVwm5K'6<pqmbo?kjIO?G39o9Fńc2x bo.4],k~țyd]3Ä{3#̾R}.Ij- YSMM9 ڸ!Sܼ;b7:S2zwg^}Br5m;y"?c&>c#yz߮?_ޮ@J:x̃d\n:Zat}Aw>Է,w3FB0y2L6poSȓܯ9GSGPOq㑎ĕ\22*e\"'"4 OﮡĒP?8-WG25,c f0:v. JdIJj`+ҔT3k腎L < eaOUԊxR=g͡!D:!33%#T-絆[-oyP ;LSOѺEք~.#n2J$e{YcHz_[Jt8^\&&))`$A)*5; kIK**iyC+R5Фn5⒜;7FLY7 -p&lٷ" G(>3"|#K -X"=.y/'@70d.q&ۭ8*8.i)EGYp7-pQ=d!Y`'tW}>9JX]_pDW|kؤ p_q[;~iopv&Ni&x@̪ڎu/ 6%Q@/ze<|^sQ8tT_ceCdj _v?^P,eҠz-T좎Ն n8 h@|pxRSwbRK{mMD]r3hl]LO8z&X@8(W*,)qC]gQV6/s:\U&ϰ4l]u nޢ׽>tnuOyU*{5KN(Ic Zݬ t!~}/>~™L~0pd׷zk߃%7}yyB42*_k( 5 679gV\Kg,/s v? -*Ts*Kb +-s2SIA[Js %&CKܤˁB+ b^TjpMR^f5+tiX #6gZ\ӫrw9)eu Z!?U؊DzJRP]:tUD$d@UQܔ*uDvB+&UpVhe\V*Z0@X- aSbd%*J*+J𽔉\i;%}i #lBHm5у&ƶ0蕖uXYI"B pOr-K{RP.Ri+APtmk~x^vFoS3ۿ(T\(!i+xnʹ*%mc,+PO[R!V@gbZ)lU>Ijk1?mAQIaʧ~G即i^EN_z s;۬fhyū)_ זg?V#Wya_nM&sV.h*-T7e% FW!tUcWb~aU.JdX@ZZh+ 7PKU!2O*cBV\]~iulӽ;UȷnS)>w=M C9Ԫfǻb]L:خ<޳3:%Eyi; X\g肚p1ɍQA*hA+ eU(#jӵ8^b&xCLDJh0`k" Jψb ;RN<&ھMHHi\7&n *t &sLBY;O.Jr*-!>7E\|&Axxt!Α-GădWoT–5qLӒeEB ̵ZȞa\()iFNK-d4vSx$8@&iv{D0@Qt=X\HDpO>v2ȓZjދء v,{'|`:ɽޖ:E?wr)-ޚkӟ]᱋ْiه}3 =0_y@cr1MWސ~ %QgcWh!B7TdVZf$HKѽvBXc|pȐ _3=pbws!g# "G3Cºw?jze^s@`Y=.B/uj/cGdz'c>GNT@M08}WPJqgOVs X 1kTRK{oDM?|G2[5řM7n|7zq&3 ln8k(V@Kb* )P,s27Po}~XqЯ2g]q6A;&+ͤ޺kB@`b"scDD!w8@Ddr}ᓬx TL2p{ҸHXpR`IlQ 3UTs ڵ\BK)̵0Xm~fpZ+p];[P(+A(G7HJɹh&]N>BS#4[qF $Y@05}+w=?G"k86 PڏTBi`b&!3{OO 9 7HcӐi6c,TAD٪Y3Ʋ -16zڎhGmoo Wxwdtoy--p9pdb<\ m*E3TTgyyƴQyaYc(6)BԸIO[N ?-uٽٻF v~g Ёg`p} Ζ}g˷0i>Bv$@)') J3DIu)?@oԗEFkeEьi.RHhjMKdtqDӌ7_sYDzQYϩ3Jq|B'*s(ҚH'fQK<;jvYH[ >)yEY-%RB(!j%+ssYQ[e㡺.h-2좻68#47XiB Bhaх0\ %O T*KFtMwt.=r h[D3["DpY\n W5'm&M63I4 ]ڭVf._,<~V .0 捌T;H&EU db\@dJdw dXWb: :PF!1k骭sӫ@lm:%FV1:HΪ EiU.̡@bTZ+8Xnp/6BO,blֲ&'%rW(Tmt`G/n >$jA; Z~4Tֵh:\U&)4OB)M+nQ*]KRܵZTU EY0jndy)C& 5۷R|<|wJTNPQA7oX\+DU8\onW[I TRI(tZ>JY室b¥ cbZ+H+5nJJ_qA݌1 `b'$x2.7x|vv&`ްZ~7_nS"pIܧhkG% k̍4=?UgOgLr&k.n\Ӝ~yx=+0k4]xvx1z -TNiM{C^}' &ZvD=(zF:6l{#t!G}y(oB4z9 Wf3RxT))mDŽI]$r7#-þڻrx0$ՔLME.܋V HN<ݸNܷah<2#m,FJrR4.n ={A6zvo!nBƍ6z3&lU In1`o(7LigG£kF^>p@4U lCd ĵYNP'hHS1e=%5"t Չ>69h[ |AJ 3u, 9 W^%/AP,-sҗKA[Js %&&cgU,ky=`-+ƾ_ ̩kmZ[@C՝[A06LyYR&[" pf)sbxk9ꐏ/"i&&k1{&ã{tqʺL}mb3}gB~n].?}Y`+Ԧұ\q,s9^]5& k\jx;29:OR5*gc^z)EVi\'zjoV-svb?a|*+VwB{_NlKʞ-}NFjo%: -sx1N')= ,Xd`+yՔ)]GrFv4M6ڂ=s?;BZ--& ϩ". ?/ş€gҶj4trɢяJ |qHnwCiDi2`m_?rd>-ΗKg%qyyafo>|].9=lmg a٦ AJE"W>5H*"ʣ]+f!2{FzK)y+lpG6" ToK%1`K6"ʑ:b΀.N$!}{ <. B +8Z ,+۟q_06zHX )GL0U4 D?nge#oMR }[DrX 8֢,^a/ Um-,A2 )an)XFZ U<@JP3N::5tD K&8g "ܣLA W, xuFa8*~0n"\j$#I-qI!8xo<8Sm ~:)_Fj|1sܱ{Fg$ :7adO'+_-%̦r_ F8n\o1G{ fI~f=aZPdf~X 7Z6U`unJ>xNgTnGHЅDUm{ MnMX 7mJbz&mIsǫ[sgCn uL?v+W`T\KB'˶/+J&nm> ~Kf2]œ8PF!3ѕ2x1QW~Qv_JyY(IsL Y6ʦD?ɥ{|8O>s3u.x~%5ARUn2SX! L|{צPBXx?[r֠ 4?w+r y"(bX0 }Z^/zUW^^{U;~I%sHSoDuB9U+),:${ A}0s,Ń\H?"y.ƭwut6Lb$2M?M ~F(gvp")[8t9ォf5:T+EyhKA eʐL8%Y<ͨkXڷRH"ֈ lq0;jB8PkQ`^b| 1iXi³:hI9=dg5jˏ=? 
MRjgş[x28衿mpKH"P+pZF < * ,JT(pI B7WMk-X\?uC" mx&΄lCbt[jBg2qT Zd3p{L;W-s%q2R, ,LdAe?)\0ov~1DdXN ?=dd: ܘyϑxwGw+F8T==[,pJVRɘ7'|%;n ~a%A9FQNE 1ZLW1d!U !Sc #83.cgJ PrD3iUŠ\$Ös>Ǩ7C$ڱ8el:u)sʈ!9vj $mE"dpPL7isUo:XثJCr&Dۻ ;냱 KP*AZ %DzoQ%bk3taU ;wMRk)u^J3CYɠ]P$5bA \@X6a_H9%3WcwRaITY@nmB(ﱦ6X0.R!&l2w#揉(Tk\?F`ɟ2 le$K%ܲwn§XQpdfDپ9e<Ŏ%|>C8:~iDpuI%|=mD)RJR,) '-;vp92u+@&tF9B>BNQ{.2Ә_0+݅Z|>yJ;OI<-oiЧ*jC;>#4s>S%TH٧>=ٶߺsENILK)kû֖Z'ToߵE( ZńyF뭂lqukqM,O'gJJ槓3nogo20ۛ4z`5/Ō!Aa דZh&=D{6=:槌G^m0ƈ=E?0p~Vo> " K-df'τi/R̵8d ]8x0W^w4}> q!Mh:To'7vtz' 1}~[AVURX~U1U!azUEOTwl9 a ?t< ۛ4휇NR X?7`g'+V$+NMv+b5ukܮ LJrPae-$pC-\sz30NLK`ߥܥK|}W sqg$ҩ?$}[: grǐN{CH>!# g>Џinc@֭);R2_ŗg0g%]\˟lbz(N>|K&AϮ\hQID;b8@Nm+PK'JXv},sRAX:A.aôJ҅s/JOW!(qZ:e8sk4!R32& (}IB]XHmpjapj+ (sYx O(O@Ź{MS!wUt$nRNXm (F\5\$#vZ%s'dMo/.ZݖXP=<^Jv-rjniExgP$r'uՅ Q +E</DJBn~a0myƊDtPX=LC REa;&UF{EC'j(¬0tވwۍ(0婣Za=M#-!Іע(g;tRQ~%H\Kzݠz9۽ROML [ qzsr-e70P~ <1ԞM3>C 6. Wـ]RCX1abn ~s:\dFX!#DM'j_|6f_ތFų4Aiq?.OD4u:mH(,+) ,#IKnLj!vћn [Cz ۭnF%(bj0{ޙ=hpNƥȑ]rdJ]ȾHN : 4* . T ?LL`i$p)4H'8H })׻mfl` ppxWFX96gcĥh,kXH+1g}c `j3FB$:d>4%p8"v{[/QVDE,Pǽ쥍{B8sі̗kYw9O\'-O^b@ȅ(C:#%R2 ԀGˍɐ0 T<Ybs2J<\fMgj 9cfEv?v%¶Ff7>ECw⡱F;֍<浡L{tn\Qb[n}̐o|=zfk+sfãq4zTQN{fajW5jc&U^=0u1bKX&1u?d6YM&뮮`^yb&e|qۦT c%T(gXy2X!DE y_MߍPlGOBozr6)z)tS~eg~(,A }gB )뙾 X(Kr'$.,ѻmuz,xg1U9YH )2#QwA_y"N^n,~P_PApa7Q^b{y(ZթQh!V{BiVH-4@\(NGOߖrF aZ4(Rj,n.DžKCm 9 rQ⎐3EΣCv[/z b|c OEgx8>~_1 In>8ɏ27F Dt kq4/3BaE3`,M#=xOJO"x4\/q !9 |Y;+82e`fXb`$hf>>-9 )Z\dL4cI4f,јh)̀,SlB b8;BB#ޖuæ$̦澬ۗՂ9dx*rP6"dž{r(a 6A> ksA[N^_¼W3*]Fa<j'N6<_+Q~ҋٲNg|0d4?^_i_~ʲ #1Rfʎcƣ/_fdL*Wܚ8#՝楙ٍdS;&UP+}'~X*H7Dnlj7A BdXI@dYV4-|#s)rieɈ3J)h^ y$`EDgUPR,(W,ר ʂ 8PM8LLLX4:PS쌰*1)΄q)fZRHF*Ft.y3LmPNYVB,zma0!3=Z v]v.eNħS$3vadž8\u쒄S+B[9|x2n~ 9!@1䂫1[);\(BdT2i,oT[ 47sBm;( Nf]B)!wm岬l5=uj|(||Hs=1i8\Et7HeXBsZ&qL8ډlW ..sXS)pE!YI<@F$#4haWX=̵(s)Ŋ`9[b2i$<DK KEWkk kdW u]`F)2k%_KjR)ndO6G~:?qalmy #_p>f=;͔)4Gu4+Gyr!N3 3쿄]5`)"`v/hhM0%Zit!gpC[_z|m}5|eGU>a}4o7:} }nT#}eZQ r1Cڿvަ~@77!ăwe@+0zwңPl+/>jrquLf(,t\}X53j?K7XM P!3w^$7wݗ{򨱥4eha#WxH nZnzm?rӽvϏ~f'IL-pKUu;WߎT C*TLpVөsS(D}W>F)hiIj*"WkUf,>u>ҳF9|du懭t:j~ j;~> >6}>mOܯ$ڢiWzhx4 OIS$ l6hS;fp7~͇1&NEi"ki?*:?"QJpff^k@ r6pڭ{#{^ ؙᅈ (T\Ym1R.%ff%/r2;7VssKFgNq gӶcwAOB~Vְ ]8Yͯ-&ocd~Kq0|߹D-Kvéc6AOQS41<apMa; ġ.op9Q!*SeFEOyFG3XFfL XwxP.Cf L׭GxPɱCiu*ejSsE3NSbڵR PVʒpJ:^jb<5ݔcf U]Z*Û؀έ%p}pNΗٕy-g#KWW8[wc`ˉ*^t%^E#b9}S7RW>K;cvϺ6v` +En>ӿvrƫ^mճJvw^:kz! 
Sd'ix=t8qM89'tMGtR.sM!=~N1t*˞0_і"ژ'(M٪&B-!W}߆MR3fC?}Uwۂ1,n;3*asik-L m8d}vXn w͏?7n&],n>A%*y ҅<.z:_󇯮 zXz<4v8ݍrf.Nta$ Bjw_M}z_/S Wvλ꼇YFEV7E#DTQWGTrLgqdH`1Os$(*+s&/?&ZLsh Ǚ-u]*%X 8SZQ֘P7pS~;ʟ%$J8($E"Oyi,o[q&=z4p<cc:3P AC#C9[@'7v `r@" @y]:[,Sq>hp;rizlH/HNHh_K3@&Ax^/oN&9#bocmC.{ۨSp$Կc,F"|׀eQ.[> nMA.H&a+Pl:ۻ5rlPi_vAi Kǥ:LN}Ymc,dL.EX+b闓'(e/K}sr˿#["B%;9=74#tpYïJhKBYho?x <Œq3/x{"18ƉEȆйPc¥SDlEZ  *)Oh4&jL.,Ŋ[e奲k'ŇOȕ&)^~O 9A8W%֊H̡ClNڷtE!`~BD #*XNHV(ٻFnlW fds9 ) id}"Jlٱd %.ke*4n*C!aakVՒI4)Tr\tXG긽oQ꽩Rzh:ˆK 2AIw<>zIیL\yPUMB!+UۃD2,%5gcםRZ{M]xĈvX~!6:{opKRVpvlнy[7n0eFm+u[wf10mFmh/PsI] ߲Q(WRZX.)5Q,ᜎ"rwK,ՄqScsH)zv\^mI-|I-vꊄ(YWoS{>ʬH):[jhԄqSk[bb_1۝9av:M.&Pm" L;UBg![I]ˤn{r.]껔Q1Կ$>{B{ʡPZ_͛zmd{A@z=Qܨ5^4g5[˼^[RvbڃCqy,eH@!useWbrkHUIH){*^2hrv.3ЂuŪɾNÓ0up&_AB5*8" 8MZT2[E=uW2*P*߹;j}1ލZ ցB,Q^J^P7 %1 Ї8{?\"uѪO56|QL03VTTjF$̊2{9-?2e#e88F2Z LvMTg=A첬qKg?GBIuAr8ARsնA{6vW.9*( qeF=A C;R*'BqP^7HEՐ.=2J:s6?2폌jnS)c{AmH^}5mwxg~ppvp+u/Ofl]SKE_>(imzb#X)+kv-&EkD[>e (czXmD8j{9lEaKvzW .TN@^>~QwF3VsQ?3Dswo'Php'lluik@C[vUՔO]x JZR_I.fJ'f7ScDځf>!_5o&NBwz;{ 17) Bd90ga> 2 ӦKi0meC)VOG)T&8ĮE1\Nk.{>ɏb4 43=ͤqJaGj+q+8 bÿ+CFա43 8&}s=V22XG;B֜s$h˕b>6.LFXԬM󳩇'ˮeУBƠOHbfKNAf͙L>5`t1&ok=?IJtLғm'{d [Or)D[N.'Ӹ#tނ;ctV։֖ɜȈlyEv8٬%gqgFjQO>h?;G1LJi oB-{ʼnl0D=rL;ܐ(!D>&ԁvvӪP,c~|\?8?^/wzN6h22`Q;lV Bǡ?ȳkZxvk2#: q8XQϴT3éYDJ k.^ ѫy#*c`4(vHn@DA+q% Vd+ p ) mT248i3 V 9.J@ l1_%Έ%ۣcC4rg^0kn]̆4^+>%Dcލ[y"t,L}WI!‚p38`ma H 1i߽V .ƎkDg6BdDebF|AЈ ,> sZ5$XA{*B>_@jcy8~3?g]d󳃒'4oOTfz{a4AGߏoE_["v(.*?|0,ق77asq9 QgE=}X^Rs$n- ˷@RLA7}q8H?e۟;cqW)<]\IYsUyRrƔo21b>i@919B'@ճ2/O0wvU>mM,ظ221-udf+}.ϑ4D0o*$&*A{0X) eXQb4 !K*#-@A>7j1bJoZ:<\19}ka$`oK22Ra7vZ CY5Hv*Vg= oNcB{v'qWȐ?L%7㢆KJI>*F-.=~):u*;<ObhKOG6R]M$g#RiW.Go-]`7!OdςY>;12zK1"Y8'c㡵's+vmW,A:1Q ؁{[xՓp7iUOnrfwi -vl`^O r⑻HSn{…{r1b6~ܶuyuuHra451&m<9D@^y g$g܇5 /yMxuGP7#ײi/}.wu.s q@JJ޽D+ϾwVv k@Ǚ{ݽs6 ]x5ld*61҉/m62`/ُ1h驂enrYg%( g͛o%mqd-=IH=aӴ6=dQoNZv|3G^̻9[i{V8Kx =ssm&]owغzv~s-pViyk˭7y"0mnl[[Aptp@*שdxJ9xB@Жɬۿ-پtSL:UIdюYe1Ӡ=}`Lm{ɂѱH\kz7bjuK_">r˓/_3[ s ?Yy4䍫h-* >Bޣnu1HQ}h(& ڞjhքqMש^`_:_E\|R`!.A c ;LO EB`Z˹ϐk;nGrÁJ@MB.TRNj~8ՙY#xh8ƥ7I?ߌI狾W0K\! kxzsΦWa>f>yIk\_<{^5 ??;[a)Z PN )y_E RJsYn[gexlj67.s(LƓm0 kK zS6r*|bԅlǩp8F-ef_H^m&Ah6ׁ2yLЭ%뮻~gI<B<2MYĆIk5Xgd(+D JJXjG1$2Kegu`sz('Adg_~}ٗ4<( <M i蓯hN4#qċ:;GKNgۊ4սY Xw7NB+^i-ۺTmg9#d5r )ẺLS HKU;jBu@zYq/AUY r?1̋1}H7WwY"J/yӞ/y7U|̇=['݅W_ޒWi[EUR0 aTR]\A=RuC$DCsTA߸p!#ORcJc8 y(}EKFwGWx LT*oFw,57I &i%E,{}`Up>ڹPKbxoӃV htl&'av0%MR缺øDbY腣LY pBx˼d8(O0CLQiK{ B P)Evl,5'T.WIMBwLy`j8̑pzN2$RZ%"@]%ĤJN^1C1?J)0ᫎvMAuqYN;x X谲 )|WK`"|1z;^fTCOy]6zFbXGFLxts$,{myqE]i97b.Ÿ\2qv8~fa/1g(/PuHbx^Ƚ&aC$M/_Fw~d?/, _(-ÂOz9\P\xvADP< [}by=AF A6?{VnJ/.V2" !q`7AI^hm[[7Ur[`{sE3>y#~\W2ƀf ^@H@r],v v~#ESv?.vSە5>T׵R tǒB߻dߦuxOvu:6Ox1~|2p[,f?eݞD}(YzUl  ;b0_u?<|O1*ߨө,''dSLķ,uEm 4X _IMuHIbIK'yiZX,?*l]o>-f~'Ňz@O";(a\Y_o-vTL`FY?7#sWCR̙K/8fN_r}TS>)&z&̄j"-rm&5[]4 ^{~d|3/PbDCbphijD\& R eN 6z .h(6iӫ܉j_peQ.ʉhz &z䍄Ժdl .LvADJEXM*h+NJD]ntÁ C2Jxr 柂KFs¬ lm#9y?[~ɦ UٴꮚI榌͞)G0ӳou{\gq]uvkbf!H"1CɅZ#&X$È.drN%VfE컾>~kur񑄯ѡ;/&t Q(Z0 ɥ Ɣ9LA`IHc̱&aOe?_VX?& I!P뵜Xw&S6!|EH;&MJ1pU(r2U4a+I*pPIktPOvTPϼ8Q#/ֳk^WupkUZkvx buÙ!~$, ŁJ' QpYj%|ʵ A(CAJ! 
6c_{Ms+'ޅ<1ϔ@>W1)zݾ=v֦Dq|Qckq ¨Q|.G3QT֮/G]+n6h|֣ @$T?'g/iUԫG{Se"d Eq5KAAqI2/*`.U0ҾQC}} Mϳ<_eoغ` ֻ鼼Nc-;X@2aDS?ђ[CT 5Ir&*GU5+<*ɁsޡmU&5Z'~ݿՌe)^;om J)nCp*R|>VGLLX'leRkq8JH)G<*-*`l19xݹ)V&bSL3g9B5Ϟ{ksԫv>'H=?"[T{%wZzC\7}ZM5BTkF77֧5:O>1LC 0~":Sߧ]y(W ƻ78N3Z]2gS2uo4aP p{_QʁBQ~[>;㹡JvùufO7SoǍݣǮ08ʹfƹMkoL[-R~#7 $mxmVhv>: @gr3,`-tj놳p⌸hɮOIHnJVfvR&b+kw%rGtahc^cNAµ' %Lds&F*jŢ;:iv=P#<cYT<_""2Vp·ĠZc!KFV΄2@ϖ 2ӊa܌Dy[Fςtx/~ >Žk50p%ˊO<#e%/QCwX lV^'| xQ''G3:E蘮}KBre:Fa-`Z^F7NĤN >)&-Z\;)SGΥNy*u(p8,[Lcb鮥603_ EJfj;&c MnGbB]ʘ)y2V8<-( " FLxy%W(ֈbc)+zhqbL݁Wz y(`ɴܾӎ5S`[n)#/% dO*C2+:)hi-]eHMY^WLt\j;x1Q{%?i/ 69E.-&ZW1QD)vRΨKx^h] &!sI9N#2u&} i.*GkKG 5PȘPֻ}pG|MԩM&(<.b&6 Q=MbI+_{wr"U=TN4yG#مfh$T\R-,k 0#Z1^Z' :he&6Vl"C&)h&W,9'kg%̼Ja-(}wJ-RYt'k +^O{Jr-\@8ӆ`Fx΁09o /{T ]ů6QsW]'rËFځk.q DƦcSNU!=.>n $k~;eĎK PmL!e4I. ="b"!Y'b;kJ)),{J%)4$WFUVJi zeIGIae).b"QuQ*EWbZp+f\J`M_F+Ѩʹ(x> f/+h8ZQ\fP<͙ޖq,wfyjʬfJH+=J 2+3k9bnM߱v;CCwZpKnAs Q|Uސ(n'QZ*QMTL~yk1j{vc1 3]WE}n)H+׳یFfwe@=@|v~e79[gpj} !PC_0(@X9)6⧻,#m u[9P/D#L, 9nETik<& y#/*PZ} {;G JdևP7jeޯK(d4~ 4i^gӼΦy]7GTrzNF&AXPڦPƖYNHuB8,Xw +W_# ` kiE^?-ɇ<yV>pwWӪ5WrqWr.7/ۉם@[Z떜C]vk" oՒs%b4\cK8a iJ0-1W?a0hL%GC?=:xfog8FGqWg${uijBT9ťrF<Un!Lj)Drkj#Stwh[7pUAftrC wl?<KRԝK62#7bO?9qYldaKJuەlR Dxrᶢݜ_m *Ux{Z堅BW}݊?EE?W7iq:o^ux VoM\~E!by6^5S,hcںlø=~]VzCJADBsdSs]VлuAt}Gv+cZ#ڻuD+n}Xwn]l3qw3J'n]uPb:]Ļr!/v^ܸ7)(lQpiI])i_q媘 *2 [ A'+RZI w'D5N8FӮKm#<'"۪x,% %Q\59*%e(Dt^ʹ*tZ?=^_t-VG}տ5W&a[~x/|_o^:]޼{gpObpP;9,VJYkwC^I;(_֛k 8!7#I/;;jyWX,b߼hZ6%od5)ݭ)5`Y,VEP\ kr)e/aѝ{gf (!A@%OC֑{: Prdfb[Y˛`PS2l0 ih:52nD`tWyUmHf_fP9ʫ`LQcMa%?ͯg釁{3).D0&;GHסm^:V^=;4k+ﺮytN %yRYi/SF@+] x΋$U eyQ؛ tL{b7?YQjqrJ3څe^?ɍ*kj-)+|22f2 w]9g: Mc`)H~c Uz;:98vӣ">=GwSWY(ep5>\M!R#OٚX)&.n W4$:$-aSʤ.t? 2p4yNV+Sɵ1ZA ڼޠ1UT|aA_h4*JR'}Nfp Ȋ(ѣ@t1h#l%fٻ1f vn YƬ6f%/hTtQ#YשA,PZ>JVJY.LO-J/Qe|- 4g֌"_>ʕ{̻饂fgIIڪRU]kGY$%t0{䣜^TF=QjP*1@tmvkEU 4Q4g@"؇rܼRT6.-TiWO R4`;q6uǘY>&ϡ1he(pr&GY·W(fG-E._Gzz(nc>^@##(,|SX(:5ammG84 %G;|Pk8t N-4mCz{|,㤶ǻe+5r5pVGYO^\L|s[ęcMK͹G,57FvFcmPE5KODm6:elCqSq{3&n#mP:#+,"LEѥhm]tO}"q0AFvU!WWF)<w5>%$Hny^62J܂VB n1ro4&9@JJ 0##ƒ[i b|6BjJmNz 신<0Ò ֠w҆hx꒗KRJ͂qZ#ŘJEVL }&"g u5B]-i)[JJsOʁM9[rGK" )QPh3#Yx~Z 2s)X1ro|SҊt֙dV #4=;BjkJ!jtB-iw,qhlw6L#4O(rz 'abp:"IO?Kš$-sD KEI:qz4*4@][fv}Kvʅ$x9 ?5ёj׍4ןٵ8}[;Ϟ|"?/>R"~.YćtvNďo{?;^ͪ]݇۳Ou rARޥaYvnnrooߏ0vx2zvӒ}h '.M I)MKq‘kֺR/7AU~GȜXBQVs5Q+P4g P|iѮQ3b˭ HJEw!k)1(2O|IL@-hVm}M 4./MWML73\ܺkGUhf4c)3{^R&f5~҅^ xeR2Zɾfwˎk^);{|츷g^.76͔cb]`.)-yQn/D񳾯/2ozz:ꝿPT]=UW=z4*WvRyuO9J GI{Gu\x؍5DͳyZh,Ym7ee1O-|\ؚv٤оr?ۖ"-˕۫9ڽV{cVך/42OL M5-BITՒ}:x+[-{ߖfc`?k6WL1g:>E5o{FoJeG~ "Mlb2/((F 0kizۑn ~gap0{$ /J}vB!F8քZ ʘ8W-hdVId:%Ęcd6$ e^ 9r3:pۂŢPQfYҿ,6MQj7F92j<f|7VH+*^]v0Bmzsza6- |lSR% pJ8j09=s192%:W3l>O^D c՜p ~jv4\O``:]ަi{@iM |Nܺ4-΍"EZ^rS T ¬Ʒk(c!ޖt&3uŖ~Wxg~ّGO5~?5QRr 7VQR~eok!pʆI~.pFYTdst$R@x`3\$|,G cA8-qq& 5 l=<`ؒyd kMЇ]R]qJ+^RFˏKyi+1K_*RoQx.߮K𓗾b/U߱3܂'i+ٞ@~[ߐN7D~"*$9%3{c}oץXc:/lL3.'nbn14Q#oŦAs#m挖Z2ݒ R trPLtCj|xy_L=$6 %\9^}-pp^@䨧pF)2or ^IE-."dɕ</Co]53Gܵh+rg{"P dU]@oUpm't釨V=AdmJV~8dN3uv.hN7`iXơ'LZ4ߘ"JFjܽjo񕮋rst `, Vѻ]j((*x~}6e޾PAI9{MwQ>0Zנ1r:6WX13HGh&w)_-Z4M?.I`#_Њ_,\6Oy{R"Epyߤ돣UϚWm]Hzy/S ]|,%'zX;]qkg$g4kT~279t)!~op#{Y2@AhEXU/`C1R>>yö^+/,I(oy³`Rθ/׼aSۚ DS ZFKUq>Z6iszákvkx ,a"DZFP9c?9lPz@Hi?Zt Yb S,S&z̨3"ҽVΣH{#&Zq1IGgRWdi8YCfLfrp6ldJZ1;s݇v3)+9#LUH"HM v.mرh5fCf5{.K YlRcA,Vn^7υD_6%(]U ;{ٙ$.."}P&u,p]Q % x7@&ӵ:ŝN)+8-s)W77? ZyKIkx?7yy?ֱSVPk :N$DU c/Rj л_YvzMW]1 P7ojbc=j4h?/T3#J<;, -· f~CPTYbx!xaG;&J]т:p`5ȇ8(muDٶCxmQu !\% eAEYWOElj*dj C۷Cx 7 ZS1eEm{B[ tB3ȑI9gWd&8Cz[(!Hl r4b83YAs6 Qzv>47~lBwj~xݯS% zt4@4 kDS #HdMco@s9 x^{\܁xSU`=@̹﫨$FÀ^N&OxmM ,p+"YaIj*0M>9٫8.l#s}A;}>.|DžG1[ZBmc{R;9ҍU 㱵֧WDtwfdֶ df04fQ54;imR7ٺd#ZEr=[ENQ< Hæt%\;g7wn9d{mePR! 
var/home/core/zuul-output/logs/kubelet.log
Feb 02 08:56:05 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 08:56:05 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 08:56:05 
crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc 
restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
[... several hundred further "Feb 02 08:56:05 crc restorecon[4684]: <path> not reset as customized by admin to <context>" records of this same form follow, covering: the remaining catalog-content/catalog entries of pod 57a731c4-ef35-47a8-b875-bfb08a7f8011 (aws-load-balancer-operator through web-terminal, each directory plus its catalog.json or index.json), its catalog-content/cache/pogreb.v1 database files, etc-hosts, and the extract-utilities, extract-content, and registry-server container scratch files, all at s0:c7,c13; pod 3cb93b32-e0ae-4377-b9c8-fdb9842c6d59 (serviceca configmap volume, etc-hosts, node-ca container files), mostly at s0:c842,c986, two node-ca files at s0:c377,c642 and s0:c338,c343; pod 09ae3b1a-e8e7-4524-b54b-61eab6f9239a (etcd-serving-ca, trusted-ca-bundle, and audit-policies configmap volumes, etc-hosts, fix-audit-permissions and oauth-apiserver container files), mostly at s0:c764,c897, individual container files at s0:c49,c263 and s0:c10,c701; and pod 43509403-f426-496e-be36-56cef71462f5 (console-config, trusted-ca-bundle, oauth-serving-cert, and service-ca configmap volumes) at s0:c0,c25, up through the service-ca/..data entry ...]
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
[condensed log residue: same timestamp and process -- roughly 130 further "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16" entries, all for pod 8f668bae-612b-4b75-9490-919e737c6a3b's kubernetes.io~empty-dir/ca-trust-extracted volume: the extracted PEM bundles (pem/tls-ca-bundle.pem, pem/email-ca-bundle.pem, pem/objsign-ca-bundle.pem) and the individual trust-store files and hash links under pem/directory-hash/ (*.pem plus matching *.0 entries), covering the stock Mozilla-bundle CAs (ACCVRAIZ1, AC_RAIZ_FNMT-RCM, Actalis, AffirmTrust, Amazon Root CAs 1-4, Atos, Baltimore CyberTrust, Buypass, BJCA, COMODO, Certainly, Certigna, Certum, CFCA_EV_ROOT, and others) together with the cluster-local openshift-service-serving-signer_1740288168.pem and ingress-operator_1740288202.pem.]
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 08:56:05 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:05 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 08:56:06 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 02 08:56:06 crc kubenswrapper[4720]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.583934 4720 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595807 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595849 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595860 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595869 4720 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595909 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595922 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595931 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595939 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595947 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595956 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595964 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595972 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595980 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595988 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.595996 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596006 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596015 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596025 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596038 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
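[editor's note] The deprecation warnings above all point the same way: those flags belong in the file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump later in this log). As a minimal sketch, assuming PyYAML is available and using the upstream KubeletConfiguration v1beta1 field names (containerRuntimeEndpoint, volumePluginDir, registerWithTaints, systemReserved), with values taken from this log's own FLAG dump, the equivalent config stanza could be rendered like this:

```python
# Sketch: map the deprecated kubelet flags warned about above onto their
# KubeletConfiguration (kubelet.config.k8s.io/v1beta1) equivalents.
# Field names follow the upstream schema; values are the ones this log
# reports in its FLAG dump, so treat this as illustrative, not canonical.
import yaml  # assumption: PyYAML is installed

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # --container-runtime-endpoint
    "containerRuntimeEndpoint": "/var/run/crio/crio.sock",
    # --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # --register-with-taints (node-role.kubernetes.io/master=:NoSchedule)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # --system-reserved
    "systemReserved": {
        "cpu": "200m",
        "ephemeral-storage": "350Mi",
        "memory": "350Mi",
    },
    # --minimum-container-ttl-duration has no direct config-file field;
    # per the warning, evictionHard / evictionSoft replace it.
}

print(yaml.safe_dump(kubelet_config, sort_keys=False))
```

Note that --pod-infra-container-image is the odd one out: per server.go:211 above, the sandbox image now has to be configured on the CRI runtime side as well.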
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596051 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596064 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596075 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596085 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596131 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596142 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596151 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596162 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596172 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596183 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596200 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596207 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596215 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596223 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596230 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596238 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596245 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596253 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596261 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596269 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596281 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596290 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596299 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596308 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596316 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596324 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596333 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596343 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596357 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596369 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596380 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596391 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596404 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596414 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596425 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596434 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596446 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596456 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596465 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596474 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596483 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596491 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596500 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596508 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596516 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596524 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596531 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596541 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596549 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596556 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596564 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.596573 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596748 4720 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596765 4720 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596780 4720 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596792 4720 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596804 4720 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596813 4720 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596825 4720 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596836 4720 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596846 4720 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596855 4720 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596867 4720 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596915 4720 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596931 4720 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596943 4720 flags.go:64] FLAG: --cgroup-root=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596953 4720 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596965 4720 flags.go:64] FLAG: --client-ca-file=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596975 4720 flags.go:64] FLAG: --cloud-config=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596987 4720 flags.go:64] FLAG: --cloud-provider=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.596998 4720 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597016 4720 flags.go:64] FLAG: --cluster-domain=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597027 4720 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597038 4720 flags.go:64] FLAG: --config-dir=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597049 4720 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597066 4720 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597081 4720 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597093 4720 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597105 4720 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597117 4720 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597130 4720 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597141 4720 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597152 4720 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597164 4720 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597175 4720 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597191 4720 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597203 4720 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597214 4720 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597225 4720 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597236 4720 flags.go:64] FLAG: --enable-server="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597248 4720 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597263 4720 flags.go:64] FLAG: --event-burst="100"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597276 4720 flags.go:64] FLAG: --event-qps="50"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597287 4720 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597299 4720 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597312 4720 flags.go:64] FLAG: --eviction-hard=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597327 4720 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597339 4720 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597351 4720 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597367 4720 flags.go:64] FLAG: --eviction-soft=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597378 4720 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597389 4720 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597401 4720 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597413 4720 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597424 4720 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597436 4720 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597448 4720 flags.go:64] FLAG: --feature-gates=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597473 4720 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597486 4720 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597498 4720 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597510 4720 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597522 4720 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597534 4720 flags.go:64] FLAG: --help="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597545 4720 flags.go:64] FLAG: --hostname-override=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597556 4720 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597569 4720 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597580 4720 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597591 4720 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597602 4720 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597614 4720 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597625 4720 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597636 4720 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597647 4720 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597658 4720 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597671 4720 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597681 4720 flags.go:64] FLAG: --kube-reserved=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597693 4720 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597705 4720 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597717 4720 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597728 4720 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597739 4720 flags.go:64] FLAG: --lock-file=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597750 4720 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597761 4720 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597773 4720 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597792 4720 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597807 4720 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597817 4720 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597829 4720 flags.go:64] FLAG: --logging-format="text"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597839 4720 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597851 4720 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597862 4720 flags.go:64] FLAG: --manifest-url=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597872 4720 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597931 4720 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597943 4720 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597956 4720 flags.go:64] FLAG: --max-pods="110"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597966 4720 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597978 4720 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.597991 4720 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598003 4720 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598015 4720 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598026 4720 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598037 4720 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598063 4720 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598075 4720 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598086 4720 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598098 4720 flags.go:64] FLAG: --pod-cidr=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598108 4720 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598126 4720 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598138 4720 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598149 4720 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598161 4720 flags.go:64] FLAG: --port="10250"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598172 4720 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598183 4720 flags.go:64] FLAG: --provider-id=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598195 4720 flags.go:64] FLAG: --qos-reserved=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598207 4720 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598218 4720 flags.go:64] FLAG: --register-node="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598230 4720 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598242 4720 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598263 4720 flags.go:64] FLAG: --registry-burst="10"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598274 4720 flags.go:64] FLAG: --registry-qps="5"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598286 4720 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598299 4720 flags.go:64] FLAG: --reserved-memory=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598313 4720 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598325 4720 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598336 4720 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598347 4720 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598358 4720 flags.go:64] FLAG: --runonce="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598369 4720 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598380 4720 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598393 4720 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598406 4720 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598418 4720 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598430 4720 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598442 4720 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598455 4720 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598466 4720 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598477 4720 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598488 4720 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598500 4720 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598513 4720 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598527 4720 flags.go:64] FLAG: --system-cgroups=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598539 4720 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598558 4720 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598568 4720 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598579 4720 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598596 4720 flags.go:64] FLAG: --tls-min-version=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598607 4720 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598618 4720 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598630 4720 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598642 4720 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598654 4720 flags.go:64] FLAG: --v="2"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598670 4720 flags.go:64] FLAG: --version="false"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598684 4720 flags.go:64] FLAG: --vmodule=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598696 4720 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.598708 4720 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599051 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599069 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599084 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599095 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599105 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599116 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599127 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599138 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599148 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599158 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599169 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599181 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599191 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599201 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599211 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599223 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599233 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599244 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599255 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599265 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599276 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599287 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599300 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
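[editor's note] The FLAG: dump above spells out every value the kubelet was started with, one flag per flags.go:64 entry. A small parsing sketch (the regex is based on the exact `FLAG: --name="value"` format above; journal prefixes are simply ignored) to turn it back into a lookup table:

```python
# Sketch: recover a {flag: value} table from the FLAG: dump printed above.
import re

FLAG_RE = re.compile(r'FLAG: (--[\w.-]+)="(.*?)"')

def parse_flag_dump(log_text: str) -> dict[str, str]:
    """Return every --flag="value" pair emitted by flags.go:64."""
    return {name: value for name, value in FLAG_RE.findall(log_text)}

# Example using two entries in the same format as this log:
sample = (
    'I0202 08:56:06.598026 4720 flags.go:64] FLAG: --node-ip="192.168.126.11" '
    'I0202 08:56:06.598172 4720 flags.go:64] FLAG: --protect-kernel-defaults="false"'
)
flags = parse_flag_dump(sample)
assert flags["--node-ip"] == "192.168.126.11"
assert flags["--protect-kernel-defaults"] == "false"
```

These are the command-line values only; anything set in /etc/kubernetes/kubelet.conf (the --config file) is layered on top of them and does not appear in this dump.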
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599313 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599324 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599335 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599345 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599355 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599363 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599371 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599380 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599388 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599396 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599403 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599413 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599422 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599431 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599439 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599449 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599457 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599466 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599474 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599481 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599489 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599500 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599510 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599520 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599529 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599537 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599545 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599553 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599562 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599570 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599578 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599585 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599593 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599600 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599608 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599616 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599624 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599631 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599638 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599646 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599654 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599661 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599668 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599676 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599683 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599691 4720 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599699 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.599709 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.599724 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.613319 4720 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.613385 4720 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613538 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613560 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613569 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613579 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613587 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613595 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613604 4720 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613612 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613620 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613628 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613637 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613645 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613653 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613664 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
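[editor's note] The feature_gate.go:386 line above prints the effective gate set in Go's fmt notation, `{map[Name:bool ...]}`. A sketch (regex keyed to that exact notation) to pull it back into a Python dict, e.g. to diff the gate sets between two kubelet starts:

```python
# Sketch: parse the Go-formatted "feature gates: {map[...]}" summary above.
import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Extract Name:true/false pairs from a 'feature gates: {map[...]}' line."""
    body = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if body is None:
        return {}
    pairs = (item.split(":") for item in body.group(1).split())
    return {name: value == "true" for name, value in pairs}

# Abbreviated example in the same notation (journal prefix and most gates elided):
line = ("I0202 08:56:06.599724 ... feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
gates = parse_feature_gates(line)
assert gates == {"CloudDualStackNodeIPs": True, "KMSv1": True, "NodeSwap": False}
```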
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613675 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613685 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613694 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613703 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613743 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613754 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613765 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613778 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613790 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613801 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613812 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613823 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613834 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613845 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613855 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613866 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613916 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613931 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613941 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613951 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613965 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.613975 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614021 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614037 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614051 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614065 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614076 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614088 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614098 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614109 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614121 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614132 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614142 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614153 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614164 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614174 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614184 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614195 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614204 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614214 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614224 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614235 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614245 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614254 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614264 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614275 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614285 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614295 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614304 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614314 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614323 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614334 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614343 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614354 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614364 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614374 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614386 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.614403 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614695 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614719 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614730 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614741 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614752 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614763 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614773 4720 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614783 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614796 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614809 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614822 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614832 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614845 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614858 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614871 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614929 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614942 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614953 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614963 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614975 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614984 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.614995 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615004 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615014 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615025 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615036 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615046 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615056 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615066 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615077 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615086 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615095 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615105 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615115 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615129 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615141 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615151 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615160 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615170 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615181 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615192 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615202 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615212 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615222 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615232 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615246 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615256 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615265 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615274 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615282 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615291 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615299 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615307 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615315 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615322 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615330 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615340 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615349 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615357 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615365 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615373 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615381 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615388 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615396 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615403 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615411 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615420 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615428 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615435 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615443 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.615452 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.615467 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.615808 4720 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.622170 4720 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.622325 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.624095 4720 server.go:997] "Starting client certificate rotation"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.624173 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.625249 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 06:32:54.385740064 +0000 UTC
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.625387 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.654920 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.659352 4720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.661782 4720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.678805 4720 log.go:25] "Validated CRI v1 runtime API"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.723447 4720 log.go:25] "Validated CRI v1 image API"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.728836 4720 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.735100 4720 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-08-51-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.735171 4720 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.763807 4720 manager.go:217] Machine: {Timestamp:2026-02-02 08:56:06.76126242 +0000 UTC m=+0.616888056 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8eba435a-7b37-4df4-91be-d95f0b76d6c8 BootID:10ddc092-4c99-4e64-a9bb-9df8e5d5980d Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:91:78:fe Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:91:78:fe Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:50:e1:01 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8b:de:97 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:18:dc:fd Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:35:d3:5d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:54:e1:43:9f:75 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:06:89:ac:e2:a3:fd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.764395 4720 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.764736 4720 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.765516 4720 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.765946 4720 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.766023 4720 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.766391 4720 topology_manager.go:138] "Creating topology manager with none policy"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.766412 4720 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.766976 4720 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.767032 4720 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.768266 4720 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.768820 4720 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.773652 4720 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.773698 4720 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.773739 4720 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.773768 4720 kubelet.go:324] "Adding apiserver pod source"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.773791 4720 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.779123 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.779134 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.779272 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.779340 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.781231 4720 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.782378 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.784119 4720 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786028 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786076 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786094 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786110 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786135 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786153 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786170 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786199 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786221 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786307 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786332 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.786346 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.787163 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.787965 4720 server.go:1280] "Started kubelet"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.788116 4720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.788325 4720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.789323 4720 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 02 08:56:06 crc systemd[1]: Started Kubernetes Kubelet.
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.800300 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.805186 4720 server.go:460] "Adding debug handlers to kubelet server"
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.804015 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890622581d1b150 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 08:56:06.787920208 +0000 UTC m=+0.643545794,LastTimestamp:2026-02-02 08:56:06.787920208 +0000 UTC m=+0.643545794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.806315 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.806366 4720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.806527 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:44:30.683325739 +0000 UTC
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.806959 4720 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.807003 4720 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.807226 4720 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.810359 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.810489 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.810583 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.810817 4720 factory.go:55] Registering systemd factory
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.811283 4720 factory.go:221] Registration of the systemd container factory successfully
Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.811637 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.811870 4720 factory.go:153] Registering CRI-O factory
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.811943 4720 factory.go:221] Registration of the crio container factory successfully
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.812067 4720 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.812117 4720 factory.go:103] Registering Raw factory
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.812199 4720 manager.go:1196] Started watching for new ooms in manager
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.816378 4720 manager.go:319] Starting recovery of all containers
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824285 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824408 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824441 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824469 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824497 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824524 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824554 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824581 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824616 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824642 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824669 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824700 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824734 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824766 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824794 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824823 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824849 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824877 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824942 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824967 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.824992 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825025 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825051 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825076 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825103 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825128 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825159 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825189 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825217 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825246 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825274 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825300 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825327 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825353 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825382 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825408 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825435 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825462 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825491 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825519 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825545 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825569 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825594 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825620 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825648 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825675 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825700 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825722 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825740 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825760 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825785 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825813 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825848 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825961 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.825994 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826023 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826052 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826076 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826104 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826131 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826160 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826187 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826213 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826241 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826268 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826293 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826319 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826345 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826368 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826394 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826420 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826444 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826469 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826538 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826570 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826593 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826613 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826634 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826656 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826678 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826698 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826717 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826742 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826763 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826783 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826803 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826823 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826912 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826935 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826955 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.826974 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827023 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827043 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827062 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827082 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827102 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def"
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827120 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827139 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827160 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827179 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827198 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827218 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827242 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827261 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827345 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.827371 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.829871 4720 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830003 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830041 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830073 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830108 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830138 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830166 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830195 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830224 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830246 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830267 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 
08:56:06.830307 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830334 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830355 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830379 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830399 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830419 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830439 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830460 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830479 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830501 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830521 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830544 4720 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830564 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830586 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830607 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830628 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830648 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830670 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830690 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830714 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830734 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830753 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830774 4720 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830795 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830819 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830843 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830864 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830916 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830938 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830959 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830980 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.830998 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831019 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831038 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831059 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831082 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831103 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831123 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831143 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831164 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831188 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831210 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831232 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831253 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831274 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831301 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831321 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831341 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831364 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831385 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831466 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831491 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831517 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831539 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831561 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831581 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831604 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831626 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831646 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831667 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831690 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831713 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831735 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831759 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831780 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831800 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831821 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831842 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831862 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831908 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.831929 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832004 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832030 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832052 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832073 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832093 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832114 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832167 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832191 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832211 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832235 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832256 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832278 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832299 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832320 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832341 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832360 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832382 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832404 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832424 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832450 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832468 4720 reconstruct.go:97] "Volume reconstruction finished" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.832482 4720 reconciler.go:26] "Reconciler: start to sync state" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.847702 4720 manager.go:324] Recovery completed Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.869052 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.872455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.872569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.872624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.875113 4720 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.875152 4720 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.875282 4720 state_mem.go:36] "Initialized new in-memory state store" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.882680 4720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.885368 4720 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.885491 4720 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.885591 4720 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.886247 4720 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 08:56:06 crc kubenswrapper[4720]: W0202 08:56:06.886627 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.886764 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.903924 4720 policy_none.go:49] "None policy: Start" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.904977 4720 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.905016 4720 state_mem.go:35] "Initializing new in-memory state store" Feb 02 08:56:06 crc kubenswrapper[4720]: E0202 08:56:06.911022 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.985336 4720 manager.go:334] "Starting Device Plugin manager" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.985421 4720 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.985444 4720 server.go:79] "Starting device plugin registration server" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986158 4720 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986192 4720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986380 4720 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986560 4720 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986590 4720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986537 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.986804 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.988542 4720 
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.988542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.988595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.988609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.988797 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.989125 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.989183 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.990958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.990999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.991019 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.991322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.991346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.991359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.991493 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.992155 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.992234 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.993991 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.994266 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.994312 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.995904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.995938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.995955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.996199 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.996233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.996253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.996409 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997074 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997742 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.997782 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:06 crc kubenswrapper[4720]: I0202 08:56:06.999730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.008068 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.012268 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035194 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035218 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035246 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035333 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035380 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035497 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035593 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035699 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.035732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.087120 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.088463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.088524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.088705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.088762 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.089616 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.137430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140417 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 
08:56:07.140560 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140667 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140805 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140846 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140912 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.140960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141004 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141082 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.138110 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141763 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141823 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.142030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143137 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.142036 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143316 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143484 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143634 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.143698 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.141924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.290311 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.293156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.293428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.293619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.293733 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: 
E0202 08:56:07.294510 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.324955 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.346143 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.374783 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.383358 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bdd8ad13e56f36ddaa975aa9ad21ce44ffa37e66f69849f0b9be8c9c75edafa6 WatchSource:0}: Error finding container bdd8ad13e56f36ddaa975aa9ad21ce44ffa37e66f69849f0b9be8c9c75edafa6: Status 404 returned error can't find the container with id bdd8ad13e56f36ddaa975aa9ad21ce44ffa37e66f69849f0b9be8c9c75edafa6 Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.403089 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6bc9f05f10f107dc71a8583a70aa3c85df3a1575b27b859e0864e5c2ba0dbbc7 WatchSource:0}: Error finding container 6bc9f05f10f107dc71a8583a70aa3c85df3a1575b27b859e0864e5c2ba0dbbc7: Status 404 returned error can't find the container with id 6bc9f05f10f107dc71a8583a70aa3c85df3a1575b27b859e0864e5c2ba0dbbc7 Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.405423 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.413667 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.419492 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.440469 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-83a929868df1eb9369a15cc29f9b258c30af0fe1e5164f50ea58b911858aa56a WatchSource:0}: Error finding container 83a929868df1eb9369a15cc29f9b258c30af0fe1e5164f50ea58b911858aa56a: Status 404 returned error can't find the container with id 83a929868df1eb9369a15cc29f9b258c30af0fe1e5164f50ea58b911858aa56a Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.454667 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4f368ce3c8da374be0ad3b80d9b87dda7fc0701da290a020bd51a58c850cfc68 WatchSource:0}: Error finding container 4f368ce3c8da374be0ad3b80d9b87dda7fc0701da290a020bd51a58c850cfc68: Status 404 returned error can't find the container with id 4f368ce3c8da374be0ad3b80d9b87dda7fc0701da290a020bd51a58c850cfc68 Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.605623 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.605737 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.694868 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.697227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.697308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.697333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.697381 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.698212 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 02 08:56:07 crc kubenswrapper[4720]: W0202 08:56:07.790051 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:07 crc kubenswrapper[4720]: E0202 08:56:07.790218 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.802404 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.807486 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:08:25.0316303 +0000 UTC Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.890075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79d03661b7622f5b514af7ee33fbcf5558f5a855bcb62932b9f89a348edc8e1d"} Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.891272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4f368ce3c8da374be0ad3b80d9b87dda7fc0701da290a020bd51a58c850cfc68"} Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.892616 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"83a929868df1eb9369a15cc29f9b258c30af0fe1e5164f50ea58b911858aa56a"} Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.893648 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6bc9f05f10f107dc71a8583a70aa3c85df3a1575b27b859e0864e5c2ba0dbbc7"} Feb 02 08:56:07 crc kubenswrapper[4720]: I0202 08:56:07.894514 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bdd8ad13e56f36ddaa975aa9ad21ce44ffa37e66f69849f0b9be8c9c75edafa6"} Feb 02 08:56:08 crc kubenswrapper[4720]: E0202 08:56:08.214748 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Feb 02 08:56:08 crc kubenswrapper[4720]: W0202 08:56:08.215449 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:08 crc kubenswrapper[4720]: E0202 08:56:08.215555 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:08 crc kubenswrapper[4720]: W0202 08:56:08.378280 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:08 crc kubenswrapper[4720]: E0202 08:56:08.378425 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.499022 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.500962 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.501022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.501042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.501077 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 08:56:08 crc kubenswrapper[4720]: E0202 08:56:08.504706 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.802052 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.808208 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.808179 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:04:16.970877857 +0000 UTC Feb 02 08:56:08 crc kubenswrapper[4720]: E0202 08:56:08.810268 4720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.901229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.901309 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4"} Feb 02 08:56:08 crc kubenswrapper[4720]: 
I0202 08:56:08.901334 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.904154 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7" exitCode=0 Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.904328 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.904447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.906089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.906226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.906366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.908001 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5e9d3d18bbbd0d58328af0f1fc337afa0fd3fea106ce51045c9e56554d2297ce" exitCode=0 Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.908243 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.908539 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5e9d3d18bbbd0d58328af0f1fc337afa0fd3fea106ce51045c9e56554d2297ce"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.909019 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.909681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.909734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.909756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.910020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.910122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.910197 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.910903 4720 generic.go:334] "Generic (PLEG): 
container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="367cf5caacf5f3a9dc8e18f9b2fea0ce8460302d3de051d0b6d88a1ae744ed98" exitCode=0 Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.910988 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"367cf5caacf5f3a9dc8e18f9b2fea0ce8460302d3de051d0b6d88a1ae744ed98"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.911030 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.912169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.912212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.912231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.914254 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.914302 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d"} Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.914109 4720 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d" exitCode=0 Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.915758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.915810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:08 crc kubenswrapper[4720]: I0202 08:56:08.915861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:09 crc kubenswrapper[4720]: W0202 08:56:09.720052 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:09 crc kubenswrapper[4720]: E0202 08:56:09.720189 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.802349 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.808677 4720 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:14:28.5664691 +0000 UTC Feb 02 08:56:09 crc kubenswrapper[4720]: E0202 08:56:09.816357 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.920288 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74b067150a9d2b731edcc6ebb1d8f671cf19260c160a6de8a70ebea2a3702468"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.920527 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.922826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.922872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.922915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.924911 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.924953 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.924967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.925070 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.926185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.926208 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.926219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.936117 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.936135 4720 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.937501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.937536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.937548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.941446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.941486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.941501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.952495 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="85f2d0448fadaa55368f88ea8acfea9382dcf1bebd7a66d92f07ff3b2ad731ee" exitCode=0 Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.952546 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"85f2d0448fadaa55368f88ea8acfea9382dcf1bebd7a66d92f07ff3b2ad731ee"} Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.952745 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.954520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.954554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:09 crc kubenswrapper[4720]: I0202 08:56:09.954573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.105732 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.107306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.107373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.107389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.107421 4720 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 08:56:10 crc kubenswrapper[4720]: E0202 08:56:10.108078 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Feb 02 08:56:10 crc kubenswrapper[4720]: E0202 08:56:10.284171 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890622581d1b150 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 08:56:06.787920208 +0000 UTC m=+0.643545794,LastTimestamp:2026-02-02 08:56:06.787920208 +0000 UTC m=+0.643545794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.809553 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:07:01.185738932 +0000 UTC Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.964241 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4"} Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.964334 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688"} Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.964552 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.966227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.966289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.966313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.983039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae4a385838d74ffc1a40e841aa344b86112d9bbffc7199beca45a1144595a976"} Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.983142 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984077 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae4a385838d74ffc1a40e841aa344b86112d9bbffc7199beca45a1144595a976" exitCode=0 Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984433 4720 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984491 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.984572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.985754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.985854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.985928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.985952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.986049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.986110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.988064 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.988437 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.991047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.991145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:10 crc kubenswrapper[4720]: I0202 08:56:10.991167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.073802 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.309977 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.398214 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.506872 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.810129 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:16:15.80245724 +0000 UTC Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.993546 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.993620 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.994392 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8d97a9e1fca46271f066b1a3b1f33ac9ec218a485ad0180e5c12a1a7cc39bd7"} Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.994448 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a945dd78ba2788906d001278a61fa05ee9fc85d6d37e45bfd266153e11e178a8"} Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.994470 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a03d98a9e41495839d328423df4ac232c8b592415ceee579989bcb2c8986534e"} Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.994570 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.995200 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996456 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.996642 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.997926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.997965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:11 crc kubenswrapper[4720]: I0202 08:56:11.997981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:12 crc kubenswrapper[4720]: I0202 08:56:12.810822 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:03:36.182558576 +0000 UTC Feb 02 08:56:12 crc kubenswrapper[4720]: I0202 08:56:12.971014 4720 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003370 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d6991c5e902d12f0382aedf08e2817c1ceeeab19b8669a646020de59a9dba890"} Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003470 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003492 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9634c37d70994bcebfef2451f58359cac0c8483b90bf481938fa3c52e8d636b8"} Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003511 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003533 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.003689 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.005848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.005934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.005960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.005974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.005863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.006040 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.006062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.006141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.006164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.308334 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.310663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.310723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.310740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:13 crc 
Feb 02 08:56:13 crc kubenswrapper[4720]: I0202 08:56:13.812084 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:58:33.798881949 +0000 UTC
Feb 02 08:56:14 crc kubenswrapper[4720]: I0202 08:56:14.007927 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:14 crc kubenswrapper[4720]: I0202 08:56:14.009656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:14 crc kubenswrapper[4720]: I0202 08:56:14.009732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:14 crc kubenswrapper[4720]: I0202 08:56:14.009754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:14 crc kubenswrapper[4720]: I0202 08:56:14.812843 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:37:04.623350573 +0000 UTC
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.061771 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.062196 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.064493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.064576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.064594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.069291 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.487837 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.488637 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.488839 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.490760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.490819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.490837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.564195 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.609335 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.610137 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.612453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.612535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.612564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:15 crc kubenswrapper[4720]: I0202 08:56:15.813603 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:49:52.124044477 +0000 UTC
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.013974 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.013976 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.015609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.015653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.015677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.016774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.016832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.016850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:16 crc kubenswrapper[4720]: I0202 08:56:16.813946 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:44:39.146400248 +0000 UTC
Feb 02 08:56:17 crc kubenswrapper[4720]: E0202 08:56:17.009266 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 08:56:17 crc kubenswrapper[4720]: I0202 08:56:17.814994 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:41:31.217444964 +0000 UTC
Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.029305 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.030150 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
attach/detach" Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.032214 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.032347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.032367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.037275 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:18 crc kubenswrapper[4720]: I0202 08:56:18.815378 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:15:31.645457642 +0000 UTC Feb 02 08:56:19 crc kubenswrapper[4720]: I0202 08:56:19.023484 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:19 crc kubenswrapper[4720]: I0202 08:56:19.025230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:19 crc kubenswrapper[4720]: I0202 08:56:19.025307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:19 crc kubenswrapper[4720]: I0202 08:56:19.025329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:19 crc kubenswrapper[4720]: I0202 08:56:19.815746 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:50:37.967398524 +0000 UTC Feb 02 08:56:20 crc kubenswrapper[4720]: W0202 08:56:20.612082 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 08:56:20 crc kubenswrapper[4720]: I0202 08:56:20.612281 4720 trace.go:236] Trace[1470433523]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 08:56:10.610) (total time: 10001ms): Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1470433523]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:56:20.612) Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1470433523]: [10.001910463s] [10.001910463s] END Feb 02 08:56:20 crc kubenswrapper[4720]: E0202 08:56:20.612322 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 08:56:20 crc kubenswrapper[4720]: W0202 08:56:20.793298 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 08:56:20 crc kubenswrapper[4720]: I0202 08:56:20.793437 4720 trace.go:236] 
Trace[1717799052]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 08:56:10.791) (total time: 10001ms): Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1717799052]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:56:20.793) Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1717799052]: [10.001655502s] [10.001655502s] END Feb 02 08:56:20 crc kubenswrapper[4720]: E0202 08:56:20.793470 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 08:56:20 crc kubenswrapper[4720]: I0202 08:56:20.802733 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 02 08:56:20 crc kubenswrapper[4720]: I0202 08:56:20.816972 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:50:30.881685283 +0000 UTC Feb 02 08:56:20 crc kubenswrapper[4720]: W0202 08:56:20.993075 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 08:56:20 crc kubenswrapper[4720]: I0202 08:56:20.993194 4720 trace.go:236] Trace[1241608213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 08:56:10.991) (total time: 10001ms): Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1241608213]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:56:20.993) Feb 02 08:56:20 crc kubenswrapper[4720]: Trace[1241608213]: [10.001755064s] [10.001755064s] END Feb 02 08:56:20 crc kubenswrapper[4720]: E0202 08:56:20.993219 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.030376 4720 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.030473 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.464862 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.467036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.467123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.467149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.503922 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.507690 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.507799 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 08:56:21 crc kubenswrapper[4720]: I0202 08:56:21.817763 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:49:37.204079405 +0000 UTC
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.034692 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.038015 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688" exitCode=255
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.038144 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688"}
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.038231 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.038474 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.039447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.039498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.039514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.040605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.040644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.040661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.041496 4720 scope.go:117] "RemoveContainer" containerID="bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.084704 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.193567 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.193658 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.501963 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 08:56:22 crc kubenswrapper[4720]: I0202 08:56:22.818530 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:32:31.94761756 +0000 UTC
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.043192 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.044767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc"}
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.044815 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.044919 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
event="NodeHasSufficientPID" Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.046518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:23 crc kubenswrapper[4720]: I0202 08:56:23.819563 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:41:27.38184183 +0000 UTC Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.047446 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.047580 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.048679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.048773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.048795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.242372 4720 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.515756 4720 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.783692 4720 apiserver.go:52] "Watching apiserver" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.792799 4720 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.793215 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.793801 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.793904 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.793958 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:24 crc kubenswrapper[4720]: E0202 08:56:24.793971 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.793804 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.794070 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:24 crc kubenswrapper[4720]: E0202 08:56:24.794077 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.794162 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:24 crc kubenswrapper[4720]: E0202 08:56:24.794423 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.796841 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.799605 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.800058 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.800078 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.800140 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.799634 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.800346 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.800388 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.801017 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.808133 4720 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.819737 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:54:14.118879923 +0000 UTC Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.839108 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.861236 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.877770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.890025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.911176 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.930963 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.945477 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:24 crc kubenswrapper[4720]: I0202 08:56:24.957439 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:25 crc kubenswrapper[4720]: I0202 08:56:25.092386 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 08:56:25 crc kubenswrapper[4720]: I0202 08:56:25.820571 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:15:22.465670618 +0000 UTC
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.397774 4720 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.514934 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.533455 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.551347 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.567233 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.587762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.607399 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.637298 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.662004 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.821249 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:16:43.750221706 +0000 UTC Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.886210 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.886241 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.886370 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:26 crc kubenswrapper[4720]: E0202 08:56:26.886549 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:26 crc kubenswrapper[4720]: E0202 08:56:26.886691 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:26 crc kubenswrapper[4720]: E0202 08:56:26.886837 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.899805 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.908840 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.918025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.931555 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.942951 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.954367 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:26 crc kubenswrapper[4720]: I0202 08:56:26.968248 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.063259 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.081194 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.095325 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.107138 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.122623 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.137858 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.152031 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.162369 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.187230 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.189379 4720 trace.go:236] Trace[872866462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 08:56:15.245) (total time: 11943ms): Feb 02 08:56:27 crc kubenswrapper[4720]: Trace[872866462]: ---"Objects listed" error: 11943ms (08:56:27.189) Feb 02 08:56:27 crc kubenswrapper[4720]: Trace[872866462]: [11.943342278s] [11.943342278s] END Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.189416 4720 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.190810 4720 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.193201 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.218378 4720 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.248348 4720 csr.go:261] certificate signing request csr-fbfhs is approved, waiting to be issued Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.264933 4720 csr.go:257] certificate signing request csr-fbfhs is issued Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291812 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291860 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291909 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291929 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291948 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291967 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.291983 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292003 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292040 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292059 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292470 
4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292535 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292588 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292645 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292661 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292658 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292703 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292792 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292855 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292952 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.292991 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293000 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293081 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293150 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293170 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293196 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293218 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293235 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293254 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293272 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293333 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293445 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293411 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293571 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293593 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 08:56:27 crc 
kubenswrapper[4720]: I0202 08:56:27.293611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293650 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293681 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293770 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293789 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293560 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293806 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293837 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293920 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293964 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.293986 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294008 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294064 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294073 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294101 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294155 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294180 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294184 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294200 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294318 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294500 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294625 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294694 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294737 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294772 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294809 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294843 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294916 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294958 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295045 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295081 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295161 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294275 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295240 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295278 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295353 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295390 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295425 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300211 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc 
kubenswrapper[4720]: I0202 08:56:27.300273 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300358 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300405 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300483 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300511 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300559 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300580 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300603 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " 
Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300624 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300668 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300722 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294359 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294505 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300909 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300970 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294778 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300978 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294806 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.294967 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295154 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295210 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295188 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295377 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.301083 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.301217 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.301267 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.302184 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303082 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303106 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303129 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303197 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303240 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303287 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303397 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303422 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303447 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303474 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295421 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295543 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.295979 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.296128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.296664 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.296805 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.296909 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.297004 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.298553 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.298846 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.299143 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.299230 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.299656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300177 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300462 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.300795 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.302363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.302482 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.302637 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303223 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303800 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.304114 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.303614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305096 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305129 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305207 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305242 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.305924 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306123 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306750 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306826 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306874 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306939 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.306966 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307026 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307068 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307076 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307138 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307198 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307223 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307247 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307288 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307291 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307333 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307469 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307517 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307582 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309352 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309662 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307471 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.307607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.308346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.308423 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.308688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.308967 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309230 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309404 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309853 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.309926 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.310234 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.310568 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.310967 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.311325 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.311616 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.311876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.311966 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312063 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312139 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312261 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312413 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312673 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.312875 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.313126 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.313368 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.314439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.314436 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.314751 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.314756 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315006 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315110 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315278 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315400 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315652 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315676 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.315906 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316041 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.316203 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:27.816159529 +0000 UTC m=+21.671785105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316302 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316314 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316354 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316356 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316429 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316456 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316480 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316510 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316618 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316652 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316671 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316743 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316787 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316808 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.316978 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317023 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317045 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317102 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317124 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317193 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317207 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317227 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317319 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317432 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317485 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317526 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317565 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317635 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317670 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317704 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317781 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317815 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317919 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.317969 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.318012 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.318048 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.318084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319213 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319376 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319412 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319449 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319484 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319519 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319592 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319629 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319665 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.319697 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.320951 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.320990 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.321185 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.321261 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.321949 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.322069 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.322529 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323063 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323163 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323207 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323258 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323379 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323433 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323483 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323524 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323567 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323620 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323666 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323708 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323759 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323770 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323808 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323923 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323973 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324028 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324068 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324118 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324164 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324202 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324354 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324537 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324702 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324750 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 
08:56:27.324861 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325168 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326221 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326267 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326289 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326308 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326328 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326359 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326378 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326397 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326419 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326447 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326466 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326485 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326503 4720 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326527 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326545 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326564 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326588 4720 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326606 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326624 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326642 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326665 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326684 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326702 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326720 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326745 4720 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326766 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326786 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326811 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326831 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326854 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326872 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326925 4720 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326943 4720 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326959 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326979 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327005 4720 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327022 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327039 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327058 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327080 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327099 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327119 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327143 4720 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327161 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327179 4720 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327199 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327222 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327240 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327258 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327278 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327329 4720 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327349 4720 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327370 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327391 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327419 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327439 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327456 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327482 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc 
kubenswrapper[4720]: I0202 08:56:27.327501 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327521 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327540 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327565 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327590 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327614 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327631 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327654 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327671 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327688 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327713 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327730 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327748 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327766 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327792 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327810 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327827 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327845 4720 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327869 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327908 4720 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327926 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327942 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327969 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327988 4720 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328007 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328033 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328052 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328070 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328089 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328114 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328133 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328149 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328166 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328189 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328206 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328223 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328251 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328268 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328286 4720 reconciler_common.go:293] "Volume detached for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328302 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328325 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328342 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328358 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328376 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328398 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328414 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328431 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328452 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328478 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328498 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328517 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328543 4720 reconciler_common.go:293] "Volume detached for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328563 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328581 4720 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328600 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328624 4720 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328645 4720 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328663 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328680 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328706 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328724 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.328744 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.323976 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324250 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324269 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324518 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.324919 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.338171 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325084 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325400 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.325960 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326437 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326724 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.326968 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327206 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327734 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.327933 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.337658 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.336323 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.335849 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.339022 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.340342 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.340939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.342785 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.343154 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.343421 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.347711 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.348396 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.348809 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.349011 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.349327 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.355756 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.356670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.356921 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.357031 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.357155 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.357301 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.357660 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.357767 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.358136 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.358196 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.358289 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.358554 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.358715 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.359014 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.359672 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.359772 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360013 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360151 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360202 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360552 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.360582 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.360808 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:27.860732859 +0000 UTC m=+21.716358415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.362114 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.363144 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.367947 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.368112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.368683 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:27.868587679 +0000 UTC m=+21.724213235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.369341 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.369706 4720 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.369808 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.371475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.372231 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.372265 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.377402 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.377860 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.381808 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.382307 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.383030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.387119 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.387344 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.387596 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388725 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388752 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388769 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388786 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388815 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388834 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388841 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:27.88881763 +0000 UTC m=+21.744443186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.388934 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:27.888904852 +0000 UTC m=+21.744530488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.389234 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.389259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.389494 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.389501 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.389635 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.390010 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.390581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.398231 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.398698 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.399358 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.399957 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.401135 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.412806 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.413170 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429286 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429387 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429455 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429549 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429595 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429610 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429625 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429637 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429649 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429661 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc 
kubenswrapper[4720]: I0202 08:56:27.429672 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429683 4720 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429694 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429706 4720 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429717 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429728 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429739 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429751 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429762 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429775 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429789 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429801 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429812 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc 
kubenswrapper[4720]: I0202 08:56:27.429823 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429836 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429848 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429859 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429871 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429901 4720 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429914 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429927 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429937 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429948 4720 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429959 4720 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429973 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429985 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" 
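The long run of reconciler_common.go:293 "Volume detached for volume ..." records here is the kubelet volume manager updating its actual-state-of-world bookkeeping after each successful TearDown. As a rough illustration only (plain Go, heavily simplified; the real logic lives in kubelet's pkg/kubelet/volumemanager/reconciler and uses different types), the loop amounts to:

```go
// Toy model of the kubelet volume reconciler's bookkeeping: any volume present
// in the actual state but absent from the desired state is torn down, then
// recorded as detached. This is a simplified illustration, not kubelet source.
package main

import "fmt"

type volumeName string

type reconciler struct {
	desired map[volumeName]bool // volumes that pods on this node still need
	actual  map[volumeName]bool // volumes currently mounted/attached
}

func (r *reconciler) reconcile() {
	for vol := range r.actual {
		if r.desired[vol] {
			continue // still needed; leave it mounted
		}
		// Corresponds to operation_generator's "UnmountVolume.TearDown succeeded".
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
		delete(r.actual, vol)
		// Corresponds to reconciler_common's "Volume detached for volume ...".
		fmt.Printf("Volume detached for volume %q DevicePath \"\"\n", vol)
	}
}

func main() {
	r := &reconciler{
		desired: map[volumeName]bool{"metrics-tls": true},
		actual:  map[volumeName]bool{"metrics-tls": true, "serving-cert": true},
	}
	r.reconcile() // tears down serving-cert only; metrics-tls is still desired
}
```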
Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.429996 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430008 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430019 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430032 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430044 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430056 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430067 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430079 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430091 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430104 4720 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430116 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430128 4720 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430140 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" 
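Interleaved with these teardown records, the projected.go errors earlier in this log (kube-api-access-cqllr, kube-api-access-s2dwl) fail because each kube-api-access-* volume is a projected volume combining a bound service-account token, the kube-root-ca.crt ConfigMap (plus, per these log lines, openshift-service-ca.crt on OpenShift), and the pod namespace; until those ConfigMaps show up in the kubelet's object cache they are "not registered" and SetUp cannot materialize the volume. A sketch of that composition, using the standard k8s.io/api/core/v1 types (the volume name, token lifetime, and item keys below are illustrative, not taken from this log):

```go
// Sketch of the typical shape of a kube-api-access-* projected volume,
// built with the standard k8s.io/api/core/v1 types. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // illustrative bound-token lifetime

	vol := corev1.Volume{
		Name: "kube-api-access-example", // hypothetical name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// 1. A bound service-account token.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					// 2. The cluster CA bundle; one of the ConfigMaps the log
					//    reports as "not registered" while the cache warms up.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// 3. The OpenShift service CA, the other ConfigMap named in
					//    the errors above (key name illustrative).
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
					// 4. The pod's namespace via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```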
Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430150 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430162 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430174 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430186 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430197 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430208 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430219 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430232 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430243 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430256 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430268 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430279 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430290 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430300 4720 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430311 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430322 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430332 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430343 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430355 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430365 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430376 4720 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430387 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430397 4720 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430409 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430420 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430434 4720 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430446 4720 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.430458 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.507449 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.513916 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.522448 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.821588 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:33:59.932521186 +0000 UTC Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.835981 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.836241 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:28.836197581 +0000 UTC m=+22.691823157 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.937500 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.937566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.937599 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.937626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.937782 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.937802 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.937817 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.937900 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:28.937859755 +0000 UTC m=+22.793485311 (durationBeforeRetry 1s). 
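[editor's note] The TearDown failure above is parked by nestedpendingoperations with a "durationBeforeRetry 1s" backoff that grows on repeated failures. A minimal Go sketch of that doubling-with-cap pattern, purely illustrative — the initial delay and cap constants here are assumptions, not kubelet's actual tuning:

```go
package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the previous delay up to a fixed cap, mirroring the
// growing "durationBeforeRetry" values emitted by nestedpendingoperations in
// the log lines above. initialDelay and maxDelay are assumed values.
func nextRetryDelay(prev time.Duration) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond // assumption: starting backoff
		maxDelay     = 2 * time.Minute        // assumption: backoff ceiling
	)
	if prev <= 0 {
		return initialDelay
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		d = nextRetryDelay(d)
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
	}
}
```

The retry here keeps failing for a different reason than timing: the CSI driver kubevirt.io.hostpath-provisioner has not yet re-registered with this kubelet, so every TearDownAt attempt fails until the driver's registration socket reappears.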
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938332 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938379 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:28.938369727 +0000 UTC m=+22.793995283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938435 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938466 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:28.938457949 +0000 UTC m=+22.794083505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938516 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938530 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938540 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: E0202 08:56:27.938568 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-02 08:56:28.938559152 +0000 UTC m=+22.794184708 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.971953 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-t6hpn"] Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.972413 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.973714 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8l7nw"] Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.974189 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.975024 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.976202 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.976375 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.977128 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.977313 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.977345 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.977423 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.978889 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 08:56:27 crc kubenswrapper[4720]: I0202 08:56:27.994109 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.008341 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.017837 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.033134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.034284 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038260 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0342796d-ac1a-4cfa-8666-1c772eab1ed2-rootfs\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038291 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jlc\" (UniqueName: \"kubernetes.io/projected/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-kube-api-access-t9jlc\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5gb\" (UniqueName: \"kubernetes.io/projected/0342796d-ac1a-4cfa-8666-1c772eab1ed2-kube-api-access-sm5gb\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038351 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-hosts-file\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/0342796d-ac1a-4cfa-8666-1c772eab1ed2-proxy-tls\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0342796d-ac1a-4cfa-8666-1c772eab1ed2-mcd-auth-proxy-config\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.038803 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.044108 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.061833 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.061933 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"86859dbe763f7824e244c206c7e5f192cb71f5eb1622d8359340257c476f8eaa"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.063269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d753f9dbb2c7e7c21c00f301f48f269fa801a9d4bf4874391e2c3e8b5c03cb1a"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.065460 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.065516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.065526 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"abbce9d385f454d2b5a5fce0d89df6f63e1e39e728a8d3babca1270415d8642e"} Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.068691 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.078436 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.093352 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.104267 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.118445 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.130764 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0342796d-ac1a-4cfa-8666-1c772eab1ed2-rootfs\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jlc\" (UniqueName: \"kubernetes.io/projected/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-kube-api-access-t9jlc\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139708 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5gb\" (UniqueName: 
\"kubernetes.io/projected/0342796d-ac1a-4cfa-8666-1c772eab1ed2-kube-api-access-sm5gb\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139744 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-hosts-file\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139794 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0342796d-ac1a-4cfa-8666-1c772eab1ed2-rootfs\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139811 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0342796d-ac1a-4cfa-8666-1c772eab1ed2-proxy-tls\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.139940 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0342796d-ac1a-4cfa-8666-1c772eab1ed2-mcd-auth-proxy-config\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.140384 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-hosts-file\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.140779 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0342796d-ac1a-4cfa-8666-1c772eab1ed2-mcd-auth-proxy-config\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.144279 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.144640 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0342796d-ac1a-4cfa-8666-1c772eab1ed2-proxy-tls\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.155359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jlc\" (UniqueName: \"kubernetes.io/projected/d161a80a-b09b-456a-a1f7-2fabcf16d4fb-kube-api-access-t9jlc\") pod \"node-resolver-t6hpn\" (UID: \"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\") " pod="openshift-dns/node-resolver-t6hpn" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.159314 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.161420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5gb\" (UniqueName: \"kubernetes.io/projected/0342796d-ac1a-4cfa-8666-1c772eab1ed2-kube-api-access-sm5gb\") pod \"machine-config-daemon-8l7nw\" (UID: \"0342796d-ac1a-4cfa-8666-1c772eab1ed2\") " pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.171526 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.184941 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.197609 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.210801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.219375 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.228170 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.267335 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 08:51:27 +0000 UTC, rotation deadline is 2026-10-19 02:58:13.63998071 +0000 UTC Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.267487 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Waiting 6210h1m45.372497147s for next certificate rotation
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.287981 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t6hpn"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.298231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw"
Feb 02 08:56:28 crc kubenswrapper[4720]: W0202 08:56:28.303376 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd161a80a_b09b_456a_a1f7_2fabcf16d4fb.slice/crio-b4cd160d7dd73ac98712e9983fee8ac5a5a7e964695edeb36821dcd291a3b7f4 WatchSource:0}: Error finding container b4cd160d7dd73ac98712e9983fee8ac5a5a7e964695edeb36821dcd291a3b7f4: Status 404 returned error can't find the container with id b4cd160d7dd73ac98712e9983fee8ac5a5a7e964695edeb36821dcd291a3b7f4
Feb 02 08:56:28 crc kubenswrapper[4720]: W0202 08:56:28.314402 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0342796d_ac1a_4cfa_8666_1c772eab1ed2.slice/crio-491eba311eddbe645f7563dedc05baa605306349540b07f3509a242277baa3d0 WatchSource:0}: Error finding container 491eba311eddbe645f7563dedc05baa605306349540b07f3509a242277baa3d0: Status 404 returned error can't find the container with id 491eba311eddbe645f7563dedc05baa605306349540b07f3509a242277baa3d0
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.405205 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lw7ql"]
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.405828 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ft6vx"]
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.406060 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ft6vx"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.406059 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lw7ql"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.407929 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.408217 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.408687 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.410208 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.410423 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.410583 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.415935 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.451641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.503104 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.530079 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.543835 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-socket-dir-parent\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.543912 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-bin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.543933 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-system-cni-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.543955 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cni-binary-copy\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.543976 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-netns\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544006 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-multus-certs\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544023 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-etc-kubernetes\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjctx\" (UniqueName: \"kubernetes.io/projected/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-kube-api-access-rjctx\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-system-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544191 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cnibin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544217 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cnibin\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544234 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-os-release\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544267 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-daemon-config\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbjc\" (UniqueName: \"kubernetes.io/projected/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-kube-api-access-vwbjc\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544447 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-multus\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544503 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-k8s-cni-cncf-io\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544614 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-os-release\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-conf-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544717 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-kubelet\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544757 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-binary-copy\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544846 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-hostroot\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " 
pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.544908 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.571247 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"ima
ge\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.593255 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.608864 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.623865 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.636386 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646368 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-kubelet\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646427 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-binary-copy\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-hostroot\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646481 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646502 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646518 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-socket-dir-parent\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646518 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-kubelet\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-bin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646588 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-bin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-system-cni-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cni-binary-copy\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646662 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-netns\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646689 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-system-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646706 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cnibin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 
crc kubenswrapper[4720]: I0202 08:56:28.646724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-multus-certs\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646740 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-etc-kubernetes\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646755 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjctx\" (UniqueName: \"kubernetes.io/projected/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-kube-api-access-rjctx\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cnibin\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646800 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-os-release\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646849 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-daemon-config\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646874 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbjc\" (UniqueName: \"kubernetes.io/projected/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-kube-api-access-vwbjc\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646917 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-multus\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-k8s-cni-cncf-io\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.646987 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-os-release\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647007 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-conf-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647086 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-conf-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-system-cni-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-binary-copy\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647473 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-hostroot\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647548 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cnibin\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647573 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-socket-dir-parent\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-netns\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647625 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-system-cni-dir\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-etc-kubernetes\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647666 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cnibin\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647693 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-k8s-cni-cncf-io\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-var-lib-cni-multus\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647627 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-host-run-multus-certs\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.647875 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-cni-binary-copy\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.648105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-os-release\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.648170 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-os-release\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.648412 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-multus-daemon-config\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.648562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.657279 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.670349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbjc\" 
(UniqueName: \"kubernetes.io/projected/0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b-kube-api-access-vwbjc\") pod \"multus-additional-cni-plugins-lw7ql\" (UID: \"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\") " pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.673021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjctx\" (UniqueName: \"kubernetes.io/projected/cd3c075e-27ea-4a49-b3bc-0bd6ca79c764-kube-api-access-rjctx\") pod \"multus-ft6vx\" (UID: \"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\") " pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.676195 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.694645 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.707859 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.720093 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.731096 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ft6vx" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.737324 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.743071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" Feb 02 08:56:28 crc kubenswrapper[4720]: W0202 08:56:28.745242 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3c075e_27ea_4a49_b3bc_0bd6ca79c764.slice/crio-0a92e8875c6c0e50198c601e161ccfae1bcf43985eada29e484c16938b6fa96e WatchSource:0}: Error finding container 0a92e8875c6c0e50198c601e161ccfae1bcf43985eada29e484c16938b6fa96e: Status 404 returned error can't find the container with id 0a92e8875c6c0e50198c601e161ccfae1bcf43985eada29e484c16938b6fa96e Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.761673 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: W0202 08:56:28.763979 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dc284b2_dbff_4c9d_9fd9_1c1d0381eb8b.slice/crio-2417f193831fb88fd213923abdf05970f0b843df65b989ed03588f82ee0073bf WatchSource:0}: Error finding container 2417f193831fb88fd213923abdf05970f0b843df65b989ed03588f82ee0073bf: Status 404 returned error can't find the container with id 2417f193831fb88fd213923abdf05970f0b843df65b989ed03588f82ee0073bf Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.783326 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.807407 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.821272 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.822296 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrwzp"] Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.824241 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.825661 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:52:01.3343892 +0000 UTC Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.826281 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.826428 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.830087 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.830157 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.830175 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.830280 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.830460 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.838812 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.848669 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.848848 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:30.848823811 +0000 UTC m=+24.704449367 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.855115 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.875897 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.886785 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.886942 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.887242 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.887292 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.887658 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.887779 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.888252 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.892527 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.893124 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.894793 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.895522 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.897099 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.897618 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.898282 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.899492 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.900236 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.904078 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.906684 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.907832 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.909097 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.911056 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.911795 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.914855 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.915586 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.916494 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.917280 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.917836 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.918500 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.919793 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.920281 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.920860 4720 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.921747 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.922443 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.923727 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.924371 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.925612 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.926137 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.928346 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.928978 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.929442 4720 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.929547 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.932135 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.932770 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.933358 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.933693 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.935452 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.936293 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.937198 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.937922 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.939129 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.940295 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.941698 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.942371 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.943848 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.944473 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.945952 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.946514 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.946721 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.947950 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.948432 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.948939 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.949831 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950363 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950377 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950420 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950439 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950455 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950473 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950489 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950506 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcd2\" (UniqueName: \"kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950524 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950574 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950610 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950625 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950640 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950689 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950703 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950718 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950770 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.950788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.950928 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.950971 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:30.950958236 +0000 UTC m=+24.806583792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951199 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951227 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:30.951220073 +0000 UTC m=+24.806845629 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951343 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951355 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951365 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951389 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:30.951381427 +0000 UTC m=+24.807006983 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.951428 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951437 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951447 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951455 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:28 crc kubenswrapper[4720]: E0202 08:56:28.951474 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:30.95146822 +0000 UTC m=+24.807093776 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.951977 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.962870 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.978505 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:28 crc kubenswrapper[4720]: I0202 08:56:28.992688 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:28Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.009021 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.023860 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.048031 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 
crc kubenswrapper[4720]: I0202 08:56:29.051690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051775 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051862 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051898 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051927 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051923 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051978 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.051951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052003 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052037 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052125 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin\") 
pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052169 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052168 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052247 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052302 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052312 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052533 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052604 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcd2\" (UniqueName: \"kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052612 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052639 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.052952 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 
08:56:29.058025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.063620 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.069520 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerStarted","Data":"9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.069624 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerStarted","Data":"0a92e8875c6c0e50198c601e161ccfae1bcf43985eada29e484c16938b6fa96e"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.071625 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.071693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.071708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"491eba311eddbe645f7563dedc05baa605306349540b07f3509a242277baa3d0"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.074732 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerStarted","Data":"905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.074778 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerStarted","Data":"2417f193831fb88fd213923abdf05970f0b843df65b989ed03588f82ee0073bf"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.077230 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcd2\" (UniqueName: 
\"kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2\") pod \"ovnkube-node-mrwzp\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.077320 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t6hpn" event={"ID":"d161a80a-b09b-456a-a1f7-2fabcf16d4fb","Type":"ContainerStarted","Data":"1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004"} Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.077362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t6hpn" event={"ID":"d161a80a-b09b-456a-a1f7-2fabcf16d4fb","Type":"ContainerStarted","Data":"b4cd160d7dd73ac98712e9983fee8ac5a5a7e964695edeb36821dcd291a3b7f4"} Feb 02 08:56:29 crc kubenswrapper[4720]: E0202 08:56:29.084914 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.084932 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.101531 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.121268 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.136098 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.139908 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.150741 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.172188 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.192714 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.205486 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.235114 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.254144 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.269156 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.282531 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 
08:56:29.295948 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.314132 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.333509 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.355713 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:29 crc kubenswrapper[4720]: I0202 08:56:29.826124 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:27:49.242195139 +0000 UTC Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.081633 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376" exitCode=0 Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.081713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376"} Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.083807 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97" exitCode=0 Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.084613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97"} Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.084656 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"dfd3a63dc5725b10cbe7ae38aaad6d10cf791ad7316c8dd00ce1d6bf348208bf"} Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.108561 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.125256 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.139294 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.158951 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.176150 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.192989 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.212301 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.226456 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.241437 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.263796 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc 
kubenswrapper[4720]: I0202 08:56:30.286107 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.307006 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.329208 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.347421 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.350990 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-n258j"] Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.351505 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.353468 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.354115 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.355032 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.356109 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.373102 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.387490 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.401506 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.417685 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.436980 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.454944 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.468283 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-host\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.468375 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-serviceca\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.468401 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp7mr\" (UniqueName: \"kubernetes.io/projected/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-kube-api-access-fp7mr\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.474665 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.495949 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z 
is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.510841 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.525092 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.540242 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.556252 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.570123 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-serviceca\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.570198 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp7mr\" (UniqueName: \"kubernetes.io/projected/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-kube-api-access-fp7mr\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.570249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-host\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.570371 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-host\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.571559 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-serviceca\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.578070 4720 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.592019 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp7mr\" (UniqueName: \"kubernetes.io/projected/cdad4980-ba8c-4eae-a74f-04ae0aa67a23-kube-api-access-fp7mr\") pod \"node-ca-n258j\" (UID: \"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\") " pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.603429 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.621951 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.634531 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.653239 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.671071 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.674268 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n258j" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.686483 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.710255 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.744977 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.784799 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.827330 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:29:52.72187105 +0000 UTC Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.832302 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc 
kubenswrapper[4720]: I0202 08:56:30.871596 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.875125 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.875524 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:34.875451336 +0000 UTC m=+28.731076922 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.886010 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.886075 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.886093 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.886559 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.886667 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.886966 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.907112 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.942839 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.976585 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.976652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.976684 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:30 crc kubenswrapper[4720]: I0202 08:56:30.976719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.976822 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.976848 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.976905 4720 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977117 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:34.976899684 +0000 UTC m=+28.832525250 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977153 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977254 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:34.977213111 +0000 UTC m=+28.832838677 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977294 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977316 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977348 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977364 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977408 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:34.977378576 +0000 UTC m=+28.833004132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:30 crc kubenswrapper[4720]: E0202 08:56:30.977425 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:34.977418837 +0000 UTC m=+28.833044393 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092570 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092625 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092636 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092649 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092658 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.092667 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.095512 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.099113 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59" exitCode=0 Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.099176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.102575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n258j" event={"ID":"cdad4980-ba8c-4eae-a74f-04ae0aa67a23","Type":"ContainerStarted","Data":"1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.102624 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n258j" event={"ID":"cdad4980-ba8c-4eae-a74f-04ae0aa67a23","Type":"ContainerStarted","Data":"69e75f2930a81b0573fff8fd7d6c964fab03e00c31b6f33b0e0773edb924a317"} Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.111202 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.127099 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.148353 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserve
r-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.167939 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.186109 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.204960 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.225923 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.264552 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.308864 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.346699 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc 
kubenswrapper[4720]: I0202 08:56:31.391292 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.431338 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.470962 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.507137 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.545591 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.592396 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.632371 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.670297 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.718595 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.745866 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.790198 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.826920 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.827862 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:52:18.804812838 +0000 UTC
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.871750 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.909409 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.946370 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:31 crc kubenswrapper[4720]: I0202 08:56:31.987013 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:31Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.026118 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.078803 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.109545 4720 generic.go:334] "Generic (PLEG): container finished" podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb" exitCode=0
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.109652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb"}
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.124415 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.144737 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.208394 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.227533 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.265010 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.304465 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.345804 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.383440 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.424175 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.465201 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.508641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.548372 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.586456 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.638234 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:32Z 
is after 2025-08-24T17:21:41Z" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.828585 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:31:56.198280781 +0000 UTC Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.887338 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.887384 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:32 crc kubenswrapper[4720]: I0202 08:56:32.887574 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:32 crc kubenswrapper[4720]: E0202 08:56:32.887814 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:32 crc kubenswrapper[4720]: E0202 08:56:32.887946 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:32 crc kubenswrapper[4720]: E0202 08:56:32.888121 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.118325 4720 generic.go:334] "Generic (PLEG): container finished" podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3" exitCode=0 Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.118450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3"} Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.135949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf"} Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.153026 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.181174 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.219069 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z 
is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.244077 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.258253 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.271953 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.286149 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.302735 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.325269 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\
\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.344686 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.359188 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.378363 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.439816 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.470478 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.593404 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.596504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.596560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.596577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.596727 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.606336 4720 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.606647 4720 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.608370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.608433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.608450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.608477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.608496 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.625996 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.632320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.632399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.632421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.632449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.632474 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.650076 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.656950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.657194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.657402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.657584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.657771 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.682212 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.686701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.686756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.686772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.686793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.686808 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.706389 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.712561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.712618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.712638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.712664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.712700 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.734924 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:33Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:33 crc kubenswrapper[4720]: E0202 08:56:33.735532 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.738915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.738978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.738999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.739026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.739046 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.828796 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:51:49.005805927 +0000 UTC Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.841950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.842006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.842024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.842053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.842071 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.945202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.945273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.945299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.945335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:33 crc kubenswrapper[4720]: I0202 08:56:33.945359 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:33Z","lastTransitionTime":"2026-02-02T08:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.049420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.049488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.049505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.049532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.049549 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.144513 4720 generic.go:334] "Generic (PLEG): container finished" podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3" exitCode=0 Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.144580 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.153768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.153846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.153869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.153934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.153952 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.172581 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.190773 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.214055 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.234646 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.254668 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.259568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.259626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.259644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.259673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.259692 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.282525 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.305221 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.329399 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.350924 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.364997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.365260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.365305 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.365339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.365361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.373549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.399629 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.418202 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.442720 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.470026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 
crc kubenswrapper[4720]: I0202 08:56:34.470257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.470269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.470290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.470304 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.472372 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:34Z 
is after 2025-08-24T17:21:41Z" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.574479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.576898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.576913 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.576935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.576951 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.683087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.683126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.683138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.683157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.683169 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.786311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.786345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.786356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.786377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.786389 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.829767 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:28:44.781542146 +0000 UTC Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.886965 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.887124 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:34 crc kubenswrapper[4720]: E0202 08:56:34.887664 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:34 crc kubenswrapper[4720]: E0202 08:56:34.888123 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.888407 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:34 crc kubenswrapper[4720]: E0202 08:56:34.888727 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.891433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.891507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.891526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.891555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.891574 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.950479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:34 crc kubenswrapper[4720]: E0202 08:56:34.950962 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:42.950908362 +0000 UTC m=+36.806533928 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.994909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.994971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.994989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.995014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:34 crc kubenswrapper[4720]: I0202 08:56:34.995035 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:34Z","lastTransitionTime":"2026-02-02T08:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.052796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.052955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.053025 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.053100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053152 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053278 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053176 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053335 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053330 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053389 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053422 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053354 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053312 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:43.053278133 +0000 UTC m=+36.908903729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053740 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:43.053686913 +0000 UTC m=+36.909312499 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053782 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:43.053767845 +0000 UTC m=+36.909393441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:35 crc kubenswrapper[4720]: E0202 08:56:35.053806 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:43.053793655 +0000 UTC m=+36.909419241 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.098340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.098429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.098451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.098483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.098503 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.155357 4720 generic.go:334] "Generic (PLEG): container finished" podID="0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b" containerID="06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e" exitCode=0 Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.155435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerDied","Data":"06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.182345 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.201841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.201943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.201974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.202001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.202021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.209773 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.235233 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.253317 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.267821 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.284812 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.305759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.305806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.305815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.305834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.305845 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.308935 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.324319 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.339302 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.349832 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.363195 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.378194 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.394240 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.411325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.411401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.411426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.411460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.411484 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.419456 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831
725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.515824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.515873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.515898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.515921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.515934 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.568627 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.587185 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.601055 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.620652 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.621595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.621646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.621662 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.621688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.621706 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.639858 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.657124 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40e
df625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.674426 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.694770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.716499 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.725972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.726057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.726082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.726116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.726140 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.731658 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.748612 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.763929 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.782418 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.796422 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.817754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.829101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.829176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.829197 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.829230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.829251 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.830103 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:25:58.031749706 +0000 UTC Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.933764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.933794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.933803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.933819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:35 crc kubenswrapper[4720]: I0202 08:56:35.933829 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:35Z","lastTransitionTime":"2026-02-02T08:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.037377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.037419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.037429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.037447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.037458 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.140670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.140773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.140792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.140819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.140839 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.164045 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" event={"ID":"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b","Type":"ContainerStarted","Data":"3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.181959 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.182346 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.194871 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.217600 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.221269 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.232649 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.244585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.244637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.244653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.244675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.244691 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.247624 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.268165 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.288014 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.313296 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.329273 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348275 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.348538 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.373068 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.397469 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.425481 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.448203 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.451650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.451742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.451768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.451801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.451825 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.467469 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.485703 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.504256 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.524854 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c
491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.544042 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.554938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.555004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.555022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.555051 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.555068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.567003 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.584871 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.600052 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.613626 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.625029 4720 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.626089 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb/status\": read tcp 38.102.83.177:45380->38.102.83.177:6443: use of closed network connection" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.659827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.659895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.659905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 
08:56:36.659922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.659933 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.669822 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.690171 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.704567 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.728134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.748238 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.762431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.762465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.762474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.762492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.762502 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.830692 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:59:13.301712116 +0000 UTC Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.866088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.866144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.866156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.866176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.866187 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.886545 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.886552 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.886554 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:36 crc kubenswrapper[4720]: E0202 08:56:36.886756 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:36 crc kubenswrapper[4720]: E0202 08:56:36.887122 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:36 crc kubenswrapper[4720]: E0202 08:56:36.887179 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.908138 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.922036 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.943036 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.961032 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.969767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.969831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.969859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.969923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.969944 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:36Z","lastTransitionTime":"2026-02-02T08:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:36 crc kubenswrapper[4720]: I0202 08:56:36.981576 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.003524 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.023419 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.041660 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.068087 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.073536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.073624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.073645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.073676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.073701 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.095271 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.127839 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.152396 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.173101 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.177604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.177662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.177681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.177708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.177729 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
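
[Annotation] Every status patch in this stretch of the log is rejected for the same reason: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. Below is a minimal Go sketch of the validity-window check behind this class of x509 error; the /etc/webhook-cert/tls.crt filename is an assumption (the log only shows a webhook-cert volume mounted at /etc/webhook-cert/ in the network-node-identity pod status recorded later in this file).

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Assumed filename under the webhook-cert mount seen in this log.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            fmt.Println("read cert:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block in file")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse cert:", err)
            return
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // Same shape as the log: "current time <now> is after <NotAfter>".
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        default:
            fmt.Println("valid now, expires", cert.NotAfter.Format(time.RFC3339))
        }
    }
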
Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.191209 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.191256 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.191761 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.230120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.248527 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
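
[Annotation] The triple-escaped quotes (\\\") throughout these entries are a nesting artifact, not corruption: the status patch is a JSON document, embedded as a quoted string inside klog's own quoted err field. A sketch recovering the JSON from a fragment of the machine-config-daemon patch above, starting from the form the fragment takes once klog's outer quoting is stripped (the UID and field names are taken from the log; the payload is shortened for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "strconv"
    )

    func main() {
        // Shortened fragment of the machine-config-daemon patch, as it
        // appears inside the err field after klog's quoting is removed.
        raw := `"{\"metadata\":{\"uid\":\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\"},\"status\":{\"phase\":\"Running\"}}"`

        jsonText, err := strconv.Unquote(raw) // strip the remaining escaping level
        if err != nil {
            fmt.Println("unquote:", err)
            return
        }
        var patch struct {
            Metadata struct {
                UID string `json:"uid"`
            } `json:"metadata"`
            Status struct {
                Phase string `json:"phase"`
            } `json:"status"`
        }
        if err := json.Unmarshal([]byte(jsonText), &patch); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("pod %s is %s\n", patch.Metadata.UID, patch.Status.Phase)
    }
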
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.267802 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.281240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.281304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.281328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.281360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.281386 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
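
[Annotation] The NodeNotReady condition recorded here is independent of the webhook failures: kubelet reports the container runtime network unready because no CNI configuration exists yet in /etc/kubernetes/cni/net.d/ (OVN-Kubernetes writes it only once ovnkube-node is up). A sketch approximating that presence check; the real logic lives in the container runtime's libcni config loader, which accepts .conf, .conflist, and .json files, not in kubelet in this exact form:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            return
        }
        found := false
        for _, e := range entries {
            // libcni considers these extensions valid network configs.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file; node stays NotReady")
        }
    }
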
Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.291875 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.312722 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.330192 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.351378 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.378551 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.384811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.384953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.384975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.385010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.385035 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.397985 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.418869 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.436631 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.458221 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.488475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.488526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.488538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.488558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.488571 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
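
[Annotation] The failing operation itself is an HTTPS POST from the API server to the local webhook endpoint. A rough sketch of that transport-level call and where the expiry error surfaces (assumes Go 1.20+ for the error unwrap chain; the real admission client sends an AdmissionReview body and trusts the CA bundle from the webhook configuration rather than system roots):

    package main

    import (
        "bytes"
        "crypto/tls"
        "crypto/x509"
        "errors"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Default verification; the real client would load the
                // webhook's CA bundle into RootCAs instead.
                TLSClientConfig: &tls.Config{},
            },
        }
        resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
            "application/json", bytes.NewReader([]byte(`{}`)))
        if err != nil {
            var invalid x509.CertificateInvalidError
            if errors.As(err, &invalid) && invalid.Reason == x509.Expired {
                // Matches the failure mode recorded throughout this log.
                fmt.Println("webhook certificate expired:", err)
            } else {
                fmt.Println("post failed:", err)
            }
            return
        }
        defer resp.Body.Close()
        fmt.Println("webhook answered:", resp.Status)
    }
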
Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.490034 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.512162 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.540753 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.596171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.596237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.596264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.596299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.596325 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.699622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.699673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.699686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.699707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.699720 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.802422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.802474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.802489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.802509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.802522 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.831029 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:05:44.126765541 +0000 UTC Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.904921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.904987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.905001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.905023 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:37 crc kubenswrapper[4720]: I0202 08:56:37.905067 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:37Z","lastTransitionTime":"2026-02-02T08:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.007791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.007844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.007861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.007906 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.007922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.111448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.111529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.111561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.111592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.111614 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.195017 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.215218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.215278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.215294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.215331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.215346 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.318328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.318375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.318387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.318405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.318418 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.421241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.421298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.421312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.421333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.421350 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.525505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.525561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.525573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.525594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.525608 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.629233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.629338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.629365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.629402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.629436 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.733405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.733500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.733525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.733567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.733593 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.831764 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:27:46.973536499 +0000 UTC Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.836354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.836429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.836449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.836481 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.836502 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.887460 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:38 crc kubenswrapper[4720]: E0202 08:56:38.887669 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.888459 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:38 crc kubenswrapper[4720]: E0202 08:56:38.888585 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.888780 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:38 crc kubenswrapper[4720]: E0202 08:56:38.888969 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.941555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.941617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.941646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.941685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:38 crc kubenswrapper[4720]: I0202 08:56:38.941787 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:38Z","lastTransitionTime":"2026-02-02T08:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.046307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.046388 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.046410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.046445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.046469 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.153115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.153224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.153243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.153273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.153291 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.201683 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/0.log" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.205642 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291" exitCode=1 Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.205719 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.206846 4720 scope.go:117] "RemoveContainer" containerID="1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.230607 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.250960 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.256977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.257037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.257055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.257084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.257107 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.273680 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.301059 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:38Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.497490 6031 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498083 6031 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498166 6031 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498728 6031 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498732 6031 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 08:56:38.498785 6031 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 08:56:38.498823 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:38.498844 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:38.498957 6031 factory.go:656] Stopping watch factory\\\\nI0202 08:56:38.498989 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:38.499008 6031 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 08:56:38.499022 6031 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 08:56:38.499035 6031 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.319494 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.338634 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.360819 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.362430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.362494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.362513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.362624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.362705 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.380765 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.398971 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.416067 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.438032 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.474175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.474236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.474262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.474290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.474309 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.491450 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.516834 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.543539 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:39Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.578251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.578312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.578325 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.578347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.578362 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.681188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.681262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.681276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.681299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.681315 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.784007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.784141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.784156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.784176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.784191 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.832223 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:39:00.951766352 +0000 UTC Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.888021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.888080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.888092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.888118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.888133 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.992544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.992600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.992616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.992637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:39 crc kubenswrapper[4720]: I0202 08:56:39.992649 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:39Z","lastTransitionTime":"2026-02-02T08:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.095935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.095985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.095996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.096015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.096028 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.199073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.199141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.199158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.199186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.199204 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.212498 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/0.log" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.216533 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.217012 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.240667 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.260273 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.277757 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b7
0fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.297203 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa
9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:38Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.497490 6031 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498083 6031 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498166 6031 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498728 6031 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498732 6031 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 08:56:38.498785 6031 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 08:56:38.498823 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:38.498844 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:38.498957 6031 factory.go:656] Stopping watch factory\\\\nI0202 08:56:38.498989 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:38.499008 6031 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 08:56:38.499022 6031 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 08:56:38.499035 6031 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.301855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.301922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.301933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.301952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.301963 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.312618 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.332158 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.354657 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c
491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.370470 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.386149 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406108 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406301 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.406451 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.430431 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.450245 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.471526 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.487780 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:40Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.509762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.509831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.509850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.509905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.509927 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.612849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.612943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.612965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.612993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.613013 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.716794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.716856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.716877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.716940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.716959 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.821170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.821247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.821265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.821294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.821317 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.832982 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:31:11.685632113 +0000 UTC
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.886800 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.887518 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:40 crc kubenswrapper[4720]: E0202 08:56:40.887823 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.888077 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:40 crc kubenswrapper[4720]: E0202 08:56:40.894435 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:56:40 crc kubenswrapper[4720]: E0202 08:56:40.894666 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.925650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.926082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.926131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.926169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:40 crc kubenswrapper[4720]: I0202 08:56:40.926196 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:40Z","lastTransitionTime":"2026-02-02T08:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.029365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.029436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.029494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.029524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.029543 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.134366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.134421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.134433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.134452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.134465 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.227293 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/1.log"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.228526 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/0.log"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.232665 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75" exitCode=1
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.232705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.232762 4720 scope.go:117] "RemoveContainer" containerID="1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.233688 4720 scope.go:117] "RemoveContainer" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75"
Feb 02 08:56:41 crc kubenswrapper[4720]: E0202 08:56:41.233938 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.236735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.236821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.236835 4720 kubelet_node_status.go:724] "Recording event message for node"
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.236854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.236867 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.254067 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.280183 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa
9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:38Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.497490 6031 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498083 6031 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498166 6031 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498728 6031 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498732 6031 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 08:56:38.498785 6031 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 08:56:38.498823 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:38.498844 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:38.498957 6031 factory.go:656] Stopping watch factory\\\\nI0202 08:56:38.498989 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:38.499008 6031 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 08:56:38.499022 6031 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 08:56:38.499035 6031 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] 
Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd
503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.298425 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.318176 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.335500 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.339610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.339643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.339652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.339674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.339692 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.350505 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.369405 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.382637 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.400473 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.418555 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.437784 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.440377 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv"] Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.441473 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444565 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444717 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.444791 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.460299 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.478220 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.495447 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.510356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.527830 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.537824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c249917a-a18b-49c1-807a-3c567ea9952a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.538113 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzg9q\" (UniqueName: \"kubernetes.io/projected/c249917a-a18b-49c1-807a-3c567ea9952a-kube-api-access-gzg9q\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.538206 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.538230 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.545648 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.548761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.548786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.548795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.548810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.548820 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.564953 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.580943 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.597970 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.617225 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.640069 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.640133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.640161 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c249917a-a18b-49c1-807a-3c567ea9952a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.640214 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzg9q\" (UniqueName: \"kubernetes.io/projected/c249917a-a18b-49c1-807a-3c567ea9952a-kube-api-access-gzg9q\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.640632 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.641420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.642181 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c249917a-a18b-49c1-807a-3c567ea9952a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.648442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c249917a-a18b-49c1-807a-3c567ea9952a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.651922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.652067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.652164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.652264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.652361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.667753 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzg9q\" (UniqueName: \"kubernetes.io/projected/c249917a-a18b-49c1-807a-3c567ea9952a-kube-api-access-gzg9q\") pod \"ovnkube-control-plane-749d76644c-zzpkv\" (UID: \"c249917a-a18b-49c1-807a-3c567ea9952a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.669431 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.694063 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b49487
0df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1680d83ec40d0d9c917eccbbf58baee0a4a1a974417c805e7a8a6f4b7d390291\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:38Z\\\",\\\"message\\\":\\\" reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.497490 6031 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498083 6031 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498166 6031 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498728 6031 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:38.498732 6031 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 08:56:38.498785 6031 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 08:56:38.498823 6031 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:38.498844 6031 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:38.498957 6031 factory.go:656] Stopping watch factory\\\\nI0202 08:56:38.498989 6031 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:38.499008 6031 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 08:56:38.499022 6031 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 08:56:38.499035 6031 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.711014 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.725048 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.748692 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.755403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.755476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.755495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.755525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.755545 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.763799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.766080 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.785858 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:41Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:41 crc kubenswrapper[4720]: W0202 08:56:41.786311 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc249917a_a18b_49c1_807a_3c567ea9952a.slice/crio-25fb7e951eff466a522168dc9c59df4e5eb58f9a62844aae27ed5d124b585312 WatchSource:0}: Error finding container 25fb7e951eff466a522168dc9c59df4e5eb58f9a62844aae27ed5d124b585312: Status 404 returned error can't find the container with id 25fb7e951eff466a522168dc9c59df4e5eb58f9a62844aae27ed5d124b585312
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.833952 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:58:08.069472858 +0000 UTC
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.859909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.859972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.859991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.860020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.860036 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.963509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.963558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.963566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.963584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:41 crc kubenswrapper[4720]: I0202 08:56:41.963594 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:41Z","lastTransitionTime":"2026-02-02T08:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.067473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.067507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.067516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.067533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.067568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.170563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.170614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.170648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.170666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.170680 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.239064 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/1.log"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.244043 4720 scope.go:117] "RemoveContainer" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75"
Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.244351 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.245214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" event={"ID":"c249917a-a18b-49c1-807a-3c567ea9952a","Type":"ContainerStarted","Data":"9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.245265 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" event={"ID":"c249917a-a18b-49c1-807a-3c567ea9952a","Type":"ContainerStarted","Data":"f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.245280 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" event={"ID":"c249917a-a18b-49c1-807a-3c567ea9952a","Type":"ContainerStarted","Data":"25fb7e951eff466a522168dc9c59df4e5eb58f9a62844aae27ed5d124b585312"}
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.263639 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.276003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.276056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.276066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.276083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.276099 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.279012 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.297585 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.311448 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.326895 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.340313 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.351337 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.361731 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.372629 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.379488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.379538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.379553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.379579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.379594 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.384796 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.400100 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.416815 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.438593 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.452696 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.465306 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.479588 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489281 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489315 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.489388 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.500655 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.514589 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.534261 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.554836 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9qlsb"] Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.555320 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.555463 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.555536 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.570448 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.585056 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.593438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.593524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.593547 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.593577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.593596 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.596807 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.614028 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.628767 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.647037 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.654425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4gh5\" (UniqueName: \"kubernetes.io/projected/37eb17d6-3474-4c16-aa20-cc508c7992fc-kube-api-access-l4gh5\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.654512 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.664779 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9
ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56
:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.693513 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa
9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.696690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.696781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.696809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.696847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.696873 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.706283 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.723155 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.737524 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.751211 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.756014 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4gh5\" (UniqueName: \"kubernetes.io/projected/37eb17d6-3474-4c16-aa20-cc508c7992fc-kube-api-access-l4gh5\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.756096 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.756346 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 
08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.756543 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:56:43.256502175 +0000 UTC m=+37.112127801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.773134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.779284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4gh5\" (UniqueName: \"kubernetes.io/projected/37eb17d6-3474-4c16-aa20-cc508c7992fc-kube-api-access-l4gh5\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.795841 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.800430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.800479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.800491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.800512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.800527 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.815047 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.831565 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.834868 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:01:58.625043387 +0000 UTC Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.845608 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.860763 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 
08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.879549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.886534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.886614 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.886697 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.886687 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.886816 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.886919 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903504 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:42Z","lastTransitionTime":"2026-02-02T08:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.903543 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\
"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.922200 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.951571 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.958928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:56:42 crc kubenswrapper[4720]: E0202 08:56:42.959288 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:56:58.959220658 +0000 UTC m=+52.814846244 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.965741 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.980625 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:42 crc kubenswrapper[4720]: I0202 08:56:42.997830 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:42Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.007005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.007043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.007054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.007069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.007080 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.013207 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:43Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.060157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.060231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.060294 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.060343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060520 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060649 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060668 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060689 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060756 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:59.060734369 +0000 UTC m=+52.916359965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060777 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060792 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.060834 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:59.060819551 +0000 UTC m=+52.916445147 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.061293 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.061347 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.061387 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:59.061366404 +0000 UTC m=+52.916991960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.061402 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:56:59.061396124 +0000 UTC m=+52.917021680 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.110033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.110073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.110082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.110100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.110113 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.212919 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.212964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.212976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.212992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.213007 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.252419 4720 scope.go:117] "RemoveContainer" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75"
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.252776 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.262571 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.262790 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.262910 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:56:44.262862187 +0000 UTC m=+38.118487763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.316405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.316492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.316514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.316543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.316563 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.419697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.419732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.419743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.419760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.419771 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.523314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.523403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.523431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.523466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.523487 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.627052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.627121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.627144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.627177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.627200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.731302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.731357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.731367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.731387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.731399 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.834658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.834737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.834763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.834795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.834820 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.835110 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:08:18.987764876 +0000 UTC
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.886160 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.886371 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.938443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.938517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.938541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.938576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.938602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.940630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.940684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.940710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.940737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.940760 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.963254 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:43Z is after 2025-08-24T17:21:41Z"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.969623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.969656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.969669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.969686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.969699 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:43 crc kubenswrapper[4720]: E0202 08:56:43.990659 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:43Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.997245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.997509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.997676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.997834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:43 crc kubenswrapper[4720]: I0202 08:56:43.998034 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:43Z","lastTransitionTime":"2026-02-02T08:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.016223 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:44Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.021687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.021747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
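Every patch attempt above dies at the same point: the TLS handshake with the node.network-node-identity webhook on 127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. Below is a minimal Go sketch of the crypto/x509 validity-window test that yields "certificate has expired or is not yet valid"; it is an illustration, not kubelet source, and the NotBefore value is assumed.

// Minimal sketch (not kubelet source) of the validity-window check.
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

func checkValidity(cert *x509.Certificate, now time.Time) error {
	// A certificate is only acceptable strictly inside [NotBefore, NotAfter].
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return x509.CertificateInvalidError{Cert: cert, Reason: x509.Expired}
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC), // assumed issue time
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry reported in the log
	}
	now := time.Date(2026, 2, 2, 8, 56, 44, 0, time.UTC) // node clock at failure
	fmt.Println(checkValidity(cert, now))
	// prints: x509: certificate has expired or is not yet valid: ...
}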
event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.021767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.021794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.021814 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.040841 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:44Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.046810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.046869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
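To confirm which certificate the webhook is actually serving, one can dial the endpoint named in the error and read the peer certificate's validity window directly. A diagnostic sketch (assumed tooling, not shipped with the cluster):

// Dial the webhook endpoint from the log and print its certificate dates.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// InsecureSkipVerify is deliberate: the point is to read the dates
	// off a certificate that no longer verifies.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%v notBefore=%v notAfter=%v\n", cert.Subject, cert.NotBefore, cert.NotAfter)
}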
event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.046914 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.046943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.046964 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.063732 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:44Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.063987 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.066190 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
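The "Unable to update node status ... exceeds retry count" entry marks the kubelet giving up after its fixed retry budget; in upstream kubelet the status update is attempted nodeStatusUpdateRetry (5) times per sync before this error surfaces. A paraphrase of the loop's shape (not the upstream source):

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // value used by upstream kubelet

func updateNodeStatus(tryUpdate func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdate(); err == nil {
			return nil
		}
		// each failed attempt is logged as "Error updating node status, will retry"
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	// With the webhook rejecting every patch, all five attempts fail.
	webhookDown := errors.New("tls: failed to verify certificate")
	fmt.Println(updateNodeStatus(func() error { return webhookDown }))
}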
event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.066297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.066322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.066350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.066367 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.168915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.169279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.169362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.169444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.169517 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.272679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.273026 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.273253 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:56:46.27313255 +0000 UTC m=+40.128758146 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.273287 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.273359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.273382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.273407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.273424 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.376844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.376944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.376961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.376989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.377004 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
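The "No retries permitted until ... (durationBeforeRetry 2s)" entry reflects the volume manager's exponential backoff on the failed metrics-certs mount. A sketch of that kind of policy follows; the seed, factor, and cap below are assumptions for illustration, not values read from this cluster:

package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // assumed seed
	durationBeforeRetryFactor  = 2                      // assumed doubling factor
	maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap
)

// nextBackoff doubles the wait after each failure, up to the cap.
func nextBackoff(cur time.Duration) time.Duration {
	next := cur * durationBeforeRetryFactor
	if next > maxDurationBeforeRetry {
		next = maxDurationBeforeRetry
	}
	return next
}

func main() {
	d := initialDurationBeforeRetry
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("failure %d: next retry in %v\n", attempt, d)
		d = nextBackoff(d) // 500ms, 1s, 2s, 4s, ... — a 2s wait matches an early retry
	}
}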
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.480373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.480440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.480460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.480486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.480506 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.584135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.584207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.584225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.584253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.584272 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.687350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.687415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.687433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.687465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.687488 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.792285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.792353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.792372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.792399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.792417 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.835956 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:16:46.11454025 +0000 UTC
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.886283 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.886373 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.886300 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.886528 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.886703 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:56:44 crc kubenswrapper[4720]: E0202 08:56:44.886961 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
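Note that the rotation deadline above (2025-11-12) falls months before the certificate's expiry (2026-02-24): client-go's certificate manager schedules rotation at a jittered point late in the validity window, roughly 70-90% of the way through, so rotation is attempted well before expiry. A sketch of such a policy (the exact range is an assumption, and the issue time below is inferred from a one-year certificate, not logged):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniform point in [70%, 90%] of the validity
// window (assumed range) measured from NotBefore.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := time.Duration((0.7 + 0.2*r.Float64()) * float64(total))
	return notBefore.Add(jitter)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	r := rand.New(rand.NewSource(1))
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, r))
}

The 2025-11-12 deadline in the log sits about 71% of the way through such a window, consistent with a policy of this shape.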
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.896609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.896659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.896671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.896686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:44 crc kubenswrapper[4720]: I0202 08:56:44.896697 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:44Z","lastTransitionTime":"2026-02-02T08:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.000366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.000500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.000566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.000598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.000617 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.104172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.104655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.104792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.105026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.105185 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.209928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.209999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.210017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.210045 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.210065 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.313661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.313756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.313780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.313812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.313836 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.416683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.416766 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.416787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.416815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.416836 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.520277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.520334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.520351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.520377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.520396 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.624630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.624698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.624717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.624740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.624757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.728150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.728554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.728698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.728837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.728997 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.832514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.832859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.832959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.833041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.833107 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.836802 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:19:24.069952688 +0000 UTC Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.886803 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:45 crc kubenswrapper[4720]: E0202 08:56:45.887206 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.935916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.935971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.935982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.936000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:45 crc kubenswrapper[4720]: I0202 08:56:45.936011 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:45Z","lastTransitionTime":"2026-02-02T08:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.039473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.039524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.039536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.039556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.039568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.143008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.143074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.143087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.143109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.143124 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.246166 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.246306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.246339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.246374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.246403 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.298098 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:46 crc kubenswrapper[4720]: E0202 08:56:46.298334 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:46 crc kubenswrapper[4720]: E0202 08:56:46.298409 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:56:50.298392182 +0000 UTC m=+44.154017738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.349105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.349192 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.349210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.349239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.349259 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.452830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.452949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.452968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.453004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.453030 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.557623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.558543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.558705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.558856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.559032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.663187 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.663282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.663301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.663334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.663355 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.767140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.767195 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.767209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.767233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.767249 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.837452 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:23:31.973404121 +0000 UTC Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.870993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.871060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.871075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.871098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.871115 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.886857 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.887018 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:46 crc kubenswrapper[4720]: E0202 08:56:46.887068 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.887099 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:46 crc kubenswrapper[4720]: E0202 08:56:46.887285 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:46 crc kubenswrapper[4720]: E0202 08:56:46.887774 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.910458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:46Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.928289 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:46Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.943537 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:46Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.964822 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:46Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.974006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.974082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.974107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.974147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.974177 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:46Z","lastTransitionTime":"2026-02-02T08:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:46 crc kubenswrapper[4720]: I0202 08:56:46.984991 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:46Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.007108 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.029668 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.047229 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.068420 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.078425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.078498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.078518 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.078548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.078567 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.085197 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.104952 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.126065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.151917 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b7
0fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.182562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.182600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.182612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.182672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.182825 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.184554 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.201904 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.219299 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:47Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.285649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.285686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.285694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.285710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.285720 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.389070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.389175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.389203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.389234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.389254 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.492972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.493055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.493080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.493123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.493152 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.596466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.596526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.596539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.596562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.596578 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.699961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.700012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.700024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.700044 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.700060 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.803547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.803611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.803629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.803656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.803674 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.838332 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:13:05.205093257 +0000 UTC Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.886795 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:47 crc kubenswrapper[4720]: E0202 08:56:47.887067 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.906762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.907061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.907083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.907106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:47 crc kubenswrapper[4720]: I0202 08:56:47.907124 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:47Z","lastTransitionTime":"2026-02-02T08:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.010229 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.010287 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.010305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.010332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.010350 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:48Z","lastTransitionTime":"2026-02-02T08:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical node-status blocks repeated at 08:56:48.113 and 08:56:48.216 ...]
[... identical node-status blocks repeated at 08:56:48.319, 08:56:48.423 and 08:56:48.527 ...]
[... identical node-status blocks repeated at 08:56:48.630, 08:56:48.733 and 08:56:48.838 ...]
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.839062 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:53:18.094490582 +0000 UTC
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.886758 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:48 crc kubenswrapper[4720]: E0202 08:56:48.887108 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.887235 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:48 crc kubenswrapper[4720]: E0202 08:56:48.887510 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:56:48 crc kubenswrapper[4720]: I0202 08:56:48.887711 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:48 crc kubenswrapper[4720]: E0202 08:56:48.887928 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... identical node-status block repeated at 08:56:48.942 ...]
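Note the kubelet-serving certificate lines: the certificate expires 2026-02-24, yet each recomputed rotation deadline (2025-11-14, then 2025-12-15 here) falls before the log's own 2026-02-02 clock, so rotation is already due and the manager draws a fresh jittered deadline on each pass. A minimal sketch of that style of computation, assuming the deadline is drawn uniformly between 70% and 90% of the certificate's lifetime (an assumption for illustration, not the exact upstream constants):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Jittered rotation deadline in the style of the kubelet's certificate
// manager: pick a random point late in the validity window so a fleet of
// nodes does not all rotate at once.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())) // assumed 70-90% window
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)                 // assumed 90-day validity
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}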
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.046407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.046475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.046496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.046518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.046532 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:49Z","lastTransitionTime":"2026-02-02T08:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical node-status blocks repeated at 08:56:49.149 and 08:56:49.544 ...]
[... identical node-status blocks repeated at 08:56:49.650 and 08:56:49.754 ...]
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.839526 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:44:13.958305673 +0000 UTC
[... identical node-status block repeated at 08:56:49.858 ...]
Feb 02 08:56:49 crc kubenswrapper[4720]: I0202 08:56:49.885832 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:49 crc kubenswrapper[4720]: E0202 08:56:49.886113 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
[... identical node-status block repeated at 08:56:49.961 ...]
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.065989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.066072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.066110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.066135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.066150 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:50Z","lastTransitionTime":"2026-02-02T08:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical node-status blocks repeated at 08:56:50.170 and 08:56:50.273 ...]
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.349145 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:50 crc kubenswrapper[4720]: E0202 08:56:50.349446 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 08:56:50 crc kubenswrapper[4720]: E0202 08:56:50.349593 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:56:58.349559999 +0000 UTC m=+52.205185585 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered
[... identical node-status block repeated at 08:56:50.378 ...]
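The nestedpendingoperations entry shows the kubelet's per-volume exponential backoff at work: this failure forbids retries for 8s (no retry before 08:56:58), and the delay grows with each consecutive failure of the same operation. A minimal sketch of that policy, with an assumed 500 ms initial delay and 2 m cap rather than the exact upstream constants:

package main

import (
	"fmt"
	"time"
)

// Exponential backoff in the style of the kubelet's volume operation
// retries: each failed MountVolume.SetUp doubles the wait before the
// next attempt, up to a cap. Initial delay and cap are assumptions.
type backoff struct {
	delay time.Duration
}

func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
		if b.delay > 2*time.Minute { // assumed cap
			b.delay = 2 * time.Minute
		}
	}
	return b.delay
}

func main() {
	var b backoff
	for i := 1; i <= 6; i++ {
		fmt.Printf("retry %d after %v\n", i, b.next())
	}
	// Prints 500ms, 1s, 2s, 4s, 8s, 16s: the fifth consecutive failure
	// lands at 8s, the "durationBeforeRetry 8s" seen in the log above.
}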
[... identical node-status blocks repeated at 08:56:50.482, 08:56:50.586 and 08:56:50.689 ...]
[... identical node-status block repeated at 08:56:50.794 ...]
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.839721 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:54:02.634630091 +0000 UTC
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.890627 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:50 crc kubenswrapper[4720]: E0202 08:56:50.890795 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.891149 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.891205 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:50 crc kubenswrapper[4720]: E0202 08:56:50.891382 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:56:50 crc kubenswrapper[4720]: E0202 08:56:50.891531 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.897176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.897225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.897250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.897279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:50 crc kubenswrapper[4720]: I0202 08:56:50.897304 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:50Z","lastTransitionTime":"2026-02-02T08:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.000988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.001044 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.001055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.001077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.001091 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:51Z","lastTransitionTime":"2026-02-02T08:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... identical node-status blocks repeated at 08:56:51.105, 08:56:51.208 and 08:56:51.314 ...]
[... identical node-status blocks repeated at 08:56:51.417, 08:56:51.521 and 08:56:51.624 ...]
[... identical node-status blocks repeated at 08:56:51.728 and 08:56:51.832 ...]
Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.840354 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:56:21.037867211 +0000 UTC
Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.886609 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:51 crc kubenswrapper[4720]: E0202 08:56:51.886932 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.935463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.935544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.935563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.935594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:51 crc kubenswrapper[4720]: I0202 08:56:51.935614 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:51Z","lastTransitionTime":"2026-02-02T08:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.039516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.039598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.039618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.039649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.039669 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:52Z","lastTransitionTime":"2026-02-02T08:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... identical node-status blocks repeated at 08:56:52.144, 08:56:52.247 and 08:56:52.355 ...]
[... identical node-status blocks repeated at 08:56:52.459, 08:56:52.562 and 08:56:52.665 ...]
Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.840547 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:38:35.417235721 +0000 UTC
Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.886304 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.886356 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:56:52 crc kubenswrapper[4720]: I0202 08:56:52.886408 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:56:52 crc kubenswrapper[4720]: E0202 08:56:52.886544 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:56:52 crc kubenswrapper[4720]: E0202 08:56:52.886637 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:56:52 crc kubenswrapper[4720]: E0202 08:56:52.886776 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
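The pod_workers errors above all trace back to the same root cause reported in the Ready condition: no CNI configuration file under /etc/kubernetes/cni/net.d/. Below is a minimal Go sketch of that directory check, written from the error text rather than from the actual CRI-O/kubelet sources; the glob patterns assume the conventional *.conf/*.conflist/*.json config naming.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Look for CNI network configs the way the runtime's error message
// suggests: any *.conf, *.conflist, or *.json under the conf dir.
func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}
```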
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.078619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.078700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.078718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.078746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.078765 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:53Z","lastTransitionTime":"2026-02-02T08:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.841477 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:12:07.271896188 +0000 UTC
Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.886509 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:56:53 crc kubenswrapper[4720]: E0202 08:56:53.886863 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
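The two certificate_manager.go:356 records above report the same expiration (2026-02-24 05:53:03 UTC) but different rotation deadlines, because the deadline is recomputed with random jitter on each evaluation. A sketch of that jittered-deadline idea follows; the 70-90% fraction matches my understanding of client-go's certificate manager but is an assumption, not the exact implementation, and the notBefore value below is an assumed issue time (only the expiration appears in the log).

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Sketch of a jittered rotation deadline: pick a random point in
// roughly the last 10-30% of the certificate's validity window,
// which is why the two certificate_manager lines above show the
// same expiration but different rotation deadlines.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // somewhere in [70%, 90%) of the lifetime
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, 10, 27, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiration from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```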
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.908282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.908357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.908374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.908401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:53 crc kubenswrapper[4720]: I0202 08:56:53.908417 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:53Z","lastTransitionTime":"2026-02-02T08:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.011372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.011427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.011439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.011460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.011502 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.115457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.115574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.115594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.115623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.115643 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.218794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.218849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.218866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.218924 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.218945 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.324270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.324338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.324357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.324384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.324402 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
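The node-status patch attempts that follow fail with "x509: certificate has expired or is not yet valid": the network-node-identity webhook's serving certificate expired on 2025-08-24, long before the current time of 2026-02-02. A minimal Go sketch of that validity check follows; the certificate file path is a placeholder, not a path from this system.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Reproduce the validity check behind the x509 error in the patch
// failures below: a certificate is rejected when the current time
// falls outside its [NotBefore, NotAfter] window.
func main() {
	data, err := os.ReadFile("webhook-cert.pem") // hypothetical cert file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
}
```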
Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.377917 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:54Z is after 
2025-08-24T17:21:41Z"
Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.383677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.383742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.383759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.383786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.383807 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.441176 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:54Z is after 
2025-08-24T17:21:41Z" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.446462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.446550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.446569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.446597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.446619 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.467731 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:54Z is after 
2025-08-24T17:21:41Z" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.473826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.473911 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.473935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.473971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.473994 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.497131 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:54Z is after 
2025-08-24T17:21:41Z" Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.497407 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.501524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.501585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.501638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.501672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.501694 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.605357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.605439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.605458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.605486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.605506 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.709177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.709605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.709750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.709931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.710064 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.813239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.813294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.813310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.813338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.813361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.841701 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:13:04.644811786 +0000 UTC Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.886781 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.887072 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.887070 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.887215 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.887577 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:54 crc kubenswrapper[4720]: E0202 08:56:54.888020 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.917052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.917118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.917138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.917163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:54 crc kubenswrapper[4720]: I0202 08:56:54.917183 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:54Z","lastTransitionTime":"2026-02-02T08:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.020429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.020859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.021026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.021152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.021400 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.124589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.124661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.124753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.124780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.124797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.227960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.228041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.228067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.228102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.228125 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.331230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.331293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.331311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.331337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.331359 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.434214 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.434270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.434285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.434310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.434327 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.537931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.538003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.538021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.538050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.538068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.642042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.642123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.642143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.642172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.642190 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.745361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.745434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.745452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.745483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.745502 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.842699 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:14:01.418615414 +0000 UTC Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.848068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.848120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.848136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.848160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.848176 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.890613 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:55 crc kubenswrapper[4720]: E0202 08:56:55.890959 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.950782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.950843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.950866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.950940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:55 crc kubenswrapper[4720]: I0202 08:56:55.950965 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:55Z","lastTransitionTime":"2026-02-02T08:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.054245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.054331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.054548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.054578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.054597 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.158315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.158377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.158394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.158423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.158440 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.262064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.262157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.262179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.262216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.262234 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.365034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.365107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.365124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.365148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.365166 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.468566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.468656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.468674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.468703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.468722 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.571324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.571412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.571435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.571465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.571486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.675374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.675450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.675467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.675494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.675516 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.779362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.779453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.779479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.779517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.779542 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.843944 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:43:55.925836606 +0000 UTC Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.882716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.882811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.882833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.882859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.882875 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.886267 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.886351 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:56 crc kubenswrapper[4720]: E0202 08:56:56.886524 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.886639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:56 crc kubenswrapper[4720]: E0202 08:56:56.886786 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:56 crc kubenswrapper[4720]: E0202 08:56:56.887013 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.922460 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:56Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.942701 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:56Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.963730 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:56Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.985612 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:56Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.989978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.990118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.990144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.990177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:56 crc kubenswrapper[4720]: I0202 08:56:56.990200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:56Z","lastTransitionTime":"2026-02-02T08:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.012640 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.040016 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 
08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.060244 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.077723 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.094467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.094517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.094538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.094568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.094592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.101585 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.125149 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.143786 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.162134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.192827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 
08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.198925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.198990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.199007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.199035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.199051 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.216509 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.235701 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.261437 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:57Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.301938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.301991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.302003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.302031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.302044 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.404651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.404699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.404716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.404734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.404744 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.508652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.508778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.508797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.508824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.508852 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.612154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.612200 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.612209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.612225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.612235 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.714956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.715018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.715038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.715065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.715082 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.818754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.818814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.818825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.818844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.818859 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.844650 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:28:38.983807212 +0000 UTC Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.888200 4720 scope.go:117] "RemoveContainer" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.888200 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:57 crc kubenswrapper[4720]: E0202 08:56:57.888375 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.923107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.923177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.923196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.923230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:57 crc kubenswrapper[4720]: I0202 08:56:57.923250 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:57Z","lastTransitionTime":"2026-02-02T08:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.027204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.027258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.027272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.027290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.027303 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.131152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.131200 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.131211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.131228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.131240 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.234625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.234671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.234681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.234700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.234713 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.337983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.338059 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.338077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.338104 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.338129 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.357525 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.357799 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.357871 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:57:14.357846934 +0000 UTC m=+68.213472520 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.442605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.442704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.442729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.442762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.442785 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.547333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.547432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.547457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.547513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.547537 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.584641 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/1.log" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.587548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.588105 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.603839 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.620230 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.643105 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.650015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.650057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.650067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.650083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.650095 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.659648 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.674140 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.693785 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 
08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.713916 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.732513 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.750133 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.753026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.753086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.753103 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.753133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.753149 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.772804 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a7759531
5922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.785924 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.805423 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.820066 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.837944 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.845296 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:36:23.882584045 +0000 UTC Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.854927 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.855756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.855851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.855910 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.855951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.855993 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.879462 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:58Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.886370 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.886438 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.886520 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.886597 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.886739 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.886828 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.963527 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:56:58 crc kubenswrapper[4720]: E0202 08:56:58.963869 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:57:30.963803559 +0000 UTC m=+84.819429115 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.965568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.965615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.965627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.965648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:58 crc kubenswrapper[4720]: I0202 08:56:58.965663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:58Z","lastTransitionTime":"2026-02-02T08:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.066026 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.066092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.066120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.066141 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066276 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066294 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066305 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066325 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066354 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:57:31.066340184 +0000 UTC m=+84.921965730 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066464 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066475 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066537 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066560 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066509 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:57:31.066480447 +0000 UTC m=+84.922106003 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066625 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:57:31.06661416 +0000 UTC m=+84.922239716 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.066647 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:57:31.066638991 +0000 UTC m=+84.922264547 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.068547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.068578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.068592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.068615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.068628 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.174267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.174333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.174350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.174377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.174396 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.277927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.277992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.278011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.278041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.278067 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.381557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.381689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.381712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.381738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.381757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.485818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.485943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.485966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.485995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.486016 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.590356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.590436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.590461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.590493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.590519 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.594628 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/2.log" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.595828 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/1.log" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.600714 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704" exitCode=1 Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.600762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.600867 4720 scope.go:117] "RemoveContainer" containerID="908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.602267 4720 scope.go:117] "RemoveContainer" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704" Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.602625 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.626428 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.650114 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.667022 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.690758 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.694492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.694683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.694808 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.694967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.695089 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.704939 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.722261 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.740209 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.759481 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.787641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.798584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.798694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.798719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.798747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.798768 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.821626 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a7759531
5922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://908f71926e93ae468d9d89537be9d7f312cf4aaa9f52e4160661eb375eaf0f75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:40Z\\\",\\\"message\\\":\\\"] Removed *v1.Pod event handler 3\\\\nI0202 08:56:40.190855 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 08:56:40.192012 6173 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192115 6173 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 08:56:40.192145 6173 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:56:40.192150 6173 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:56:40.192166 6173 factory.go:656] Stopping watch factory\\\\nI0202 08:56:40.192180 6173 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:56:40.192114 6173 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192204 6173 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 08:56:40.192234 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 08:56:40.192240 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:56:40.192392 6173 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651
492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.840470 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.845588 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:58:10.670191338 +0000 UTC Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.862352 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.881583 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.886047 4720 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:56:59 crc kubenswrapper[4720]: E0202 08:56:59.886272 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.901848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.901920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.901935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.901958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.901973 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:56:59Z","lastTransitionTime":"2026-02-02T08:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.908957 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.928613 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:56:59 crc kubenswrapper[4720]: I0202 08:56:59.954785 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:56:59Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.005163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.005206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.005218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.005235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.005246 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.108839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.108923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.108944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.108970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.108987 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.212717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.212783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.212801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.212826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.212849 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.317415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.317582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.317612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.317646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.318682 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.422614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.422691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.422709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.422736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.422757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.525982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.526094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.526107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.526126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.526179 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.606094 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/2.log" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.613165 4720 scope.go:117] "RemoveContainer" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704" Feb 02 08:57:00 crc kubenswrapper[4720]: E0202 08:57:00.613507 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628496 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.628818 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.648169 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.664330 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.679215 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.694836 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.707864 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.721627 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.731611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.731662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.731672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.731689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.731699 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.736453 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.752353 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.784495 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.796698 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.809727 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.823250 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.834877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.834970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.834989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.835014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.835034 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.840680 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.846517 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:34:23.480058198 +0000 UTC Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.858393 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.871413 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:00Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.886720 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.886822 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.886721 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:00 crc kubenswrapper[4720]: E0202 08:57:00.887025 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:00 crc kubenswrapper[4720]: E0202 08:57:00.887203 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:00 crc kubenswrapper[4720]: E0202 08:57:00.887398 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.937650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.937709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.937724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.937746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:00 crc kubenswrapper[4720]: I0202 08:57:00.937762 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:00Z","lastTransitionTime":"2026-02-02T08:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.041094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.041146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.041159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.041179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.041194 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.144626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.144684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.144702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.144724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.144741 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.248733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.248810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.248829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.248857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.248875 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.352539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.352599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.352655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.352689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.352709 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.408083 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.427648 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.444229 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, 
failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.456491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.456536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.456549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.456570 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.456583 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.462218 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.483613 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.506712 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.533145 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.551788 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.559833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.559931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.559953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.559977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.559993 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.571489 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.586854 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.606400 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.625649 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.646876 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.661664 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.663262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.663304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.663318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.663339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.663354 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.679655 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.699528 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.715139 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.728377 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:01Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.766206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.766273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.766290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.766316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.766337 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.847674 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:36:59.712393796 +0000 UTC Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.869802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.869871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.869919 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.869947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.869968 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.886067 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:01 crc kubenswrapper[4720]: E0202 08:57:01.886260 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.973028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.973141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.973163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.973241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:01 crc kubenswrapper[4720]: I0202 08:57:01.973272 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:01Z","lastTransitionTime":"2026-02-02T08:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.076506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.076578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.076595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.076623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.076641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.180705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.180814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.180832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.180856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.180873 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.284269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.284314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.284331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.284354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.284371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.387391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.387463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.387488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.387536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.387562 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.492912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.493019 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.493075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.493131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.493149 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.596624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.596697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.596715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.596739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.596755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.700351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.700418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.700436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.700461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.700482 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.804780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.804927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.804961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.804991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.805014 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.848508 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:19:11.819727412 +0000 UTC
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.886535 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:02 crc kubenswrapper[4720]: E0202 08:57:02.886782 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.887061 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:02 crc kubenswrapper[4720]: E0202 08:57:02.887276 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.888090 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:02 crc kubenswrapper[4720]: E0202 08:57:02.888216 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.908364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.908405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.908422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.908453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:02 crc kubenswrapper[4720]: I0202 08:57:02.908478 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:02Z","lastTransitionTime":"2026-02-02T08:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.012398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.012463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.012488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.012514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.012537 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.115720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.115786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.115805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.115834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.115857 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.226152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.226222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.226244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.226273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.226295 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.329398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.329473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.329493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.329518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.329537 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.433355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.433430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.433452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.433486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.433510 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.537381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.537443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.537462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.537492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.537509 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.640117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.640541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.640755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.641035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.641224 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.745394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.745466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.745484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.745514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.745532 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848633 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:04:30.817204644 +0000 UTC
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.848967 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.886915 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:03 crc kubenswrapper[4720]: E0202 08:57:03.887150 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.952002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.952349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.952512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.952650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:03 crc kubenswrapper[4720]: I0202 08:57:03.952786 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:03Z","lastTransitionTime":"2026-02-02T08:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.056610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.057007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.057261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.057468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.057659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.160720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.160796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.160821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.160857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.160940 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.264173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.264244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.264266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.264296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.264320 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.367317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.367393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.367409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.367436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.367449 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.469646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.469717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.469741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.469772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.469795 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.557630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.557697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.557713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.557742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.557759 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.574066 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:04Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.578003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.578049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.578065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.578089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.578107 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.591060 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:04Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.595315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.595385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.595404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.595435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.595454 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.609612 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:04Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.614420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.614484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.614505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.614536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.614564 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.630098 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:04Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.634747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.634817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.634846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.634875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.634928 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.651741 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:04Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.652029 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.653710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.653776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.653800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.653832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.653855 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.757259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.757326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.757346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.757371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.757389 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.849046 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:19:46.36246268 +0000 UTC Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.861565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.861896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.861981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.862073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.862153 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.887223 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.887645 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.887283 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.887913 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.887223 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:04 crc kubenswrapper[4720]: E0202 08:57:04.888785 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.966073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.966149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.966165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.966186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:04 crc kubenswrapper[4720]: I0202 08:57:04.966198 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:04Z","lastTransitionTime":"2026-02-02T08:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.069846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.070403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.070573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.070732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.070871 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.174029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.174099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.174126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.174160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.174183 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.278112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.278180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.278206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.278243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.278262 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.382096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.382163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.382176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.382196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.382209 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.485662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.485720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.485744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.485778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.485796 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.589014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.589057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.589067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.589086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.589098 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.692508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.692551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.692560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.692578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.692591 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.795976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.796027 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.796037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.796060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.796073 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.850471 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:08:41.293975656 +0000 UTC Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.886185 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:05 crc kubenswrapper[4720]: E0202 08:57:05.886396 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.898818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.898863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.898873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.898907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:05 crc kubenswrapper[4720]: I0202 08:57:05.898918 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:05Z","lastTransitionTime":"2026-02-02T08:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.001463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.001498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.001506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.001524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.001535 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.104021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.104068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.104259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.104277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.104287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.207175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.207224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.207235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.207254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.207266 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.310322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.310380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.310391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.310412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.310426 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.413503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.413538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.413547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.413562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.413572 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.516543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.516603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.516620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.516643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.516662 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.619709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.619771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.619792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.619818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.619838 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.724115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.724188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.724205 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.724228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.724244 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.830585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.830653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.830667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.830689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.830708 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.850694 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:44:30.533541643 +0000 UTC Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.887171 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:06 crc kubenswrapper[4720]: E0202 08:57:06.887360 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.888322 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:06 crc kubenswrapper[4720]: E0202 08:57:06.888576 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.888791 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:06 crc kubenswrapper[4720]: E0202 08:57:06.889055 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.905538 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:06Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.922522 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:06Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.933791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.933844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.933862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.933907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.933923 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:06Z","lastTransitionTime":"2026-02-02T08:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.938487 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:06Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.957033 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:06Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:06 crc kubenswrapper[4720]: I0202 08:57:06.986308 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:06Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.008505 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.025134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.036544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.036614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.036626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.036664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.036678 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.043335 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.061049 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.079456 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.096599 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.113622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.131979 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.139868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.139955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.139974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.140004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.140023 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.150145 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.170249 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.186539 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.206552 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:07Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.242970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.243026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.243047 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.243076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.243094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.347095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.347352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.347366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.347389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.347403 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.450844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.450914 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.450928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.450948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.450962 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.554005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.554072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.554089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.554112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.554125 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.656734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.656768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.656777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.656795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.656807 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.759652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.759692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.759706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.759725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.759739 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.851374 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:21:00.277770188 +0000 UTC Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.863021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.863087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.863101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.863129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.863146 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.886551 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:07 crc kubenswrapper[4720]: E0202 08:57:07.886757 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.966371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.966462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.966483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.966510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:07 crc kubenswrapper[4720]: I0202 08:57:07.966531 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:07Z","lastTransitionTime":"2026-02-02T08:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.070073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.070168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.070233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.071063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.071199 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.175418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.175495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.175517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.175544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.175567 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.281170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.281233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.281244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.281265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.281279 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.384938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.385009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.385034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.385069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.385094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.489506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.489600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.489627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.489662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.489690 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.593238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.593315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.593336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.593364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.593389 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.696934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.697009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.697027 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.697059 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.697076 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.800755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.800832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.800847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.801399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.801441 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.852473 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:23:07.6592934 +0000 UTC Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.886285 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:08 crc kubenswrapper[4720]: E0202 08:57:08.886468 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.886726 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:08 crc kubenswrapper[4720]: E0202 08:57:08.886792 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.886991 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:08 crc kubenswrapper[4720]: E0202 08:57:08.887071 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.905130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.905178 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.905194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.905215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:08 crc kubenswrapper[4720]: I0202 08:57:08.905229 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:08Z","lastTransitionTime":"2026-02-02T08:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.008641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.008689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.008701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.008720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.008734 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:09Z","lastTransitionTime":"2026-02-02T08:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.111857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.111944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.111960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.111985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.112003 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:09Z","lastTransitionTime":"2026-02-02T08:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.214569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.214639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.214659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.214690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.214710 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:09Z","lastTransitionTime":"2026-02-02T08:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:09.320101, 08:57:09.425428, 08:57:09.529213, 08:57:09.632840, 08:57:09.736623 and 08:57:09.840082]
Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.853611 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:37:12.775529969 +0000 UTC
Feb 02 08:57:09 crc kubenswrapper[4720]: I0202 08:57:09.886360 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:09 crc kubenswrapper[4720]: E0202 08:57:09.886595 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:09.942647 and 08:57:10.045751]
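The rotation deadline in the certificate_manager.go:356 entries changes on every pass (2025-11-16 here; 2025-11-15, 2025-12-27, 2026-01-18 and 2025-12-12 further down) because the manager re-draws a jittered deadline inside the certificate's validity window each time it checks. A sketch of that behavior, with the 70-90% window fractions and the 90-day lifetime as assumptions rather than values taken from this log:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the validity window,
// modeled on client-go's jittered deadline; the fractions are assumed.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)           // assumed lifetime
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter)) // differs every call, like the log lines
	}
}
```

A deadline already in the past (all of the ones logged here precede Feb 02) just means rotation is due and the check will run again.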
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:10.148470, 08:57:10.252196, 08:57:10.355690, 08:57:10.462480, 08:57:10.565033 and 08:57:10.667559]
[node-status heartbeat block repeated at 08:57:10.770865]
Feb 02 08:57:10 crc kubenswrapper[4720]: I0202 08:57:10.854660 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:09:17.352047297 +0000 UTC
[node-status heartbeat block repeated at 08:57:10.873859]
Feb 02 08:57:10 crc kubenswrapper[4720]: I0202 08:57:10.888904 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:10 crc kubenswrapper[4720]: E0202 08:57:10.889036 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:10 crc kubenswrapper[4720]: I0202 08:57:10.889252 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:10 crc kubenswrapper[4720]: E0202 08:57:10.889303 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:10 crc kubenswrapper[4720]: I0202 08:57:10.889406 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:10 crc kubenswrapper[4720]: E0202 08:57:10.889450 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[node-status heartbeat block repeated at 08:57:10.976930]
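Every "no CNI configuration file in /etc/kubernetes/cni/net.d/" message traces back to the container runtime finding no usable network config in that directory, which is what keeps sandbox creation for these pods blocked. A minimal sketch of that kind of directory probe (not the actual ocicni/CRI-O implementation; the accepted extensions are an assumption):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether dir holds any CNI config file.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // stays false until the network provider writes its config
}
```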
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:11.097282, 08:57:11.200720, 08:57:11.303667, 08:57:11.407953, 08:57:11.511421 and 08:57:11.614745]
[node-status heartbeat block repeated at 08:57:11.717912 and 08:57:11.821103]
Feb 02 08:57:11 crc kubenswrapper[4720]: I0202 08:57:11.855772 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:33:12.277073296 +0000 UTC
Feb 02 08:57:11 crc kubenswrapper[4720]: I0202 08:57:11.886179 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:11 crc kubenswrapper[4720]: E0202 08:57:11.886820 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
[node-status heartbeat block repeated at 08:57:11.924590 and 08:57:12.027150]
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:12.130831, 08:57:12.234827, 08:57:12.339515, 08:57:12.443978, 08:57:12.547180 and 08:57:12.649830]
[node-status heartbeat block repeated at 08:57:12.753250]
Feb 02 08:57:12 crc kubenswrapper[4720]: I0202 08:57:12.856234 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:28:10.671822855 +0000 UTC
[node-status heartbeat block repeated at 08:57:12.856704]
Feb 02 08:57:12 crc kubenswrapper[4720]: I0202 08:57:12.886159 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:12 crc kubenswrapper[4720]: I0202 08:57:12.886281 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:12 crc kubenswrapper[4720]: I0202 08:57:12.886306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:12 crc kubenswrapper[4720]: E0202 08:57:12.886706 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:12 crc kubenswrapper[4720]: E0202 08:57:12.886769 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:12 crc kubenswrapper[4720]: E0202 08:57:12.886968 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status heartbeat block repeated at 08:57:12.961728]
[node-status heartbeat block repeated, identical apart from timestamps, at 08:57:13.064188, 08:57:13.167438, 08:57:13.271574, 08:57:13.376132, 08:57:13.479766 and 08:57:13.583155]
[node-status heartbeat block repeated at 08:57:13.686230 and 08:57:13.789192]
Feb 02 08:57:13 crc kubenswrapper[4720]: I0202 08:57:13.857655 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:26:38.613458676 +0000 UTC
Feb 02 08:57:13 crc kubenswrapper[4720]: I0202 08:57:13.886580 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:13 crc kubenswrapper[4720]: E0202 08:57:13.886778 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:13 crc kubenswrapper[4720]: I0202 08:57:13.886954 4720 scope.go:117] "RemoveContainer" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704"
Feb 02 08:57:13 crc kubenswrapper[4720]: E0202 08:57:13.887250 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e"
[node-status heartbeat block repeated at 08:57:13.892283 and 08:57:13.995623]
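The "back-off 20s restarting failed container" line is kubelet's per-container crash backoff: the wait doubles on each failed restart up to a cap. A sketch assuming the commonly cited defaults (10s initial delay, x2 growth, 5m cap; these are assumptions, not values read from this log), under which 20s corresponds to the second attempt:

```go
package main

import (
	"fmt"
	"time"
)

// crashBackoff returns the wait before restart attempt n (0-based),
// assuming a 10s base, doubling, and a 5m cap.
func crashBackoff(n int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n <= 5; n++ {
		fmt.Printf("attempt %d -> back-off %s\n", n, crashBackoff(n)) // 10s, 20s, 40s, ...
	}
}
```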
Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.098866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.098936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.098956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.098974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.098985 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.201938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.202251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.202393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.202547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.202734 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.306916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.306969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.306988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.307013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.307034 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.363279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.363501 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.363643 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:57:46.363608509 +0000 UTC m=+100.219234285 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.410341 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.410389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.410403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.410423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.410438 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.514554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.515058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.515161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.515274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.515361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.618324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.618364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.618372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.618388 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.618397 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.710680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.710751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.710762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.710784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.710796 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.732520 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:14Z is after 
2025-08-24T17:21:41Z" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.738325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.738554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.738794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.739000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.739159 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.755664 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:14Z is after 
2025-08-24T17:21:41Z" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.760265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.760458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.760622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.760807 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.760976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.776313 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:14Z is after 
2025-08-24T17:21:41Z" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.780658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.780920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.781082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.781215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.781340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.795096 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:14Z is after 
2025-08-24T17:21:41Z" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.799075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.799128 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.799147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.799169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.799188 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.813240 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:14Z is after 
2025-08-24T17:21:41Z" Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.813362 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.815336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.815514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.815648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.815809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.815986 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.858199 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:40:22.327984641 +0000 UTC Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.888280 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.888431 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.888660 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.888728 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.889024 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:14 crc kubenswrapper[4720]: E0202 08:57:14.889195 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.918306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.918336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.918345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.918362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:14 crc kubenswrapper[4720]: I0202 08:57:14.918371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:14Z","lastTransitionTime":"2026-02-02T08:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.020800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.020841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.020849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.020863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.020873 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.123871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.123947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.123956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.123976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.123992 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.226688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.226738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.226751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.226771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.226817 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.329926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.329976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.329986 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.330008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.330023 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.433681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.433725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.433738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.433753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.433764 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.536673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.536732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.536743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.536767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.536779 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.639677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.639748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.639764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.639785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.639800 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.669219 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/0.log" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.669284 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" containerID="9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b" exitCode=1 Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.669325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerDied","Data":"9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.669818 4720 scope.go:117] "RemoveContainer" containerID="9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.686628 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.699967 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.716242 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.730896 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.744151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.744918 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.744957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.744968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.744990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.745003 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.757295 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.770847 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.788623 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.803682 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.819763 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.838177 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.852032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.852080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.852090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.852107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.852119 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.853789 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.859946 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:05:11.847355427 +0000 UTC Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.869211 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.886344 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:15 crc kubenswrapper[4720]: E0202 08:57:15.886503 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.889229 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a7759531
5922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.902477 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.915957 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.931861 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:15Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.955056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.955110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.955127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.955153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:15 crc kubenswrapper[4720]: I0202 08:57:15.955165 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:15Z","lastTransitionTime":"2026-02-02T08:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.057719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.057756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.057767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.057785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.057799 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.160538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.160576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.160585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.160600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.160610 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.263534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.263584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.263597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.263617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.263628 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.367029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.367077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.367093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.367114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.367129 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.469677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.469731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.469744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.469764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.469778 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.572863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.572935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.572949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.572974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.572990 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.675219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.675270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.675289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.675314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.675326 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.676149 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/0.log" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.676228 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerStarted","Data":"2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.694982 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.712032 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.736571 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.762243 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.777627 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.779182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.779303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.779324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.779353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.779384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.795605 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.811762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.829442 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c
491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.844490 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.860386 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:25:25.632624145 +0000 UTC Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.860636 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.875349 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.882439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.882480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.882493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.882517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.882533 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.886082 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.886135 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:16 crc kubenswrapper[4720]: E0202 08:57:16.886207 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:16 crc kubenswrapper[4720]: E0202 08:57:16.886328 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.886500 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:16 crc kubenswrapper[4720]: E0202 08:57:16.886595 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.890683 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.906829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.927771 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.942840 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.957347 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.969100 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.982539 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.984903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.984949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.984963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.984982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.984996 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:16Z","lastTransitionTime":"2026-02-02T08:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:16 crc kubenswrapper[4720]: I0202 08:57:16.997064 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:16Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.015192 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.032812 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.048975 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.063027 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.077297 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.088455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.088504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.088514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.088534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.088545 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:17Z","lastTransitionTime":"2026-02-02T08:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.093719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.108190 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.124771 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.136022 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.148573 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.169149 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.183045 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.191645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.191698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.191712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.191733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.191752 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:17Z","lastTransitionTime":"2026-02-02T08:57:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.200362 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.219997 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.234015 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:17Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.293958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.293996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.294007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.294024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.294037 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:17Z","lastTransitionTime":"2026-02-02T08:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.861361 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:57:32.315267413 +0000 UTC
Feb 02 08:57:17 crc kubenswrapper[4720]: I0202 08:57:17.887262 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:17 crc kubenswrapper[4720]: E0202 08:57:17.887465 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:18 crc kubenswrapper[4720]: I0202 08:57:18.861872 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:37:58.384429472 +0000 UTC
Feb 02 08:57:18 crc kubenswrapper[4720]: I0202 08:57:18.886310 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:18 crc kubenswrapper[4720]: E0202 08:57:18.886498 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:18 crc kubenswrapper[4720]: I0202 08:57:18.886796 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:18 crc kubenswrapper[4720]: E0202 08:57:18.886911 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:18 crc kubenswrapper[4720]: I0202 08:57:18.887068 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:18 crc kubenswrapper[4720]: E0202 08:57:18.887146 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:19 crc kubenswrapper[4720]: I0202 08:57:19.863174 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:32:22.49794536 +0000 UTC
Feb 02 08:57:19 crc kubenswrapper[4720]: I0202 08:57:19.886440 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:19 crc kubenswrapper[4720]: E0202 08:57:19.886549 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:20 crc kubenswrapper[4720]: I0202 08:57:20.863294 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:11:51.347496809 +0000 UTC
Feb 02 08:57:20 crc kubenswrapper[4720]: I0202 08:57:20.886768 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:20 crc kubenswrapper[4720]: I0202 08:57:20.886800 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:20 crc kubenswrapper[4720]: I0202 08:57:20.886830 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:20 crc kubenswrapper[4720]: E0202 08:57:20.886970 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:20 crc kubenswrapper[4720]: E0202 08:57:20.887258 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:20 crc kubenswrapper[4720]: E0202 08:57:20.887397 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.841972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.842016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.842033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.842054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.842071 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:21Z","lastTransitionTime":"2026-02-02T08:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.863603 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:16:51.363688805 +0000 UTC
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.886253 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:21 crc kubenswrapper[4720]: E0202 08:57:21.886448 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.944701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.944748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.944763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.944785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:21 crc kubenswrapper[4720]: I0202 08:57:21.944803 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:21Z","lastTransitionTime":"2026-02-02T08:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.047922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.048047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.048075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.048098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.048115 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.151612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.151681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.151700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.151729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.151749 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.254789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.254869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.254928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.254964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.254989 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.359243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.359313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.359339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.359371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.359394 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.462789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.462876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.462942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.462978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.463006 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.566716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.566804 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.566830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.566866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.566926 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.669961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.670024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.670048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.670078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.670100 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.773574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.773654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.773673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.773703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.773722 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.864513 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:44:18.297738679 +0000 UTC
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.877348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.877408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.877419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.877437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.877449 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.886024 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.886122 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:22 crc kubenswrapper[4720]: E0202 08:57:22.886179 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:22 crc kubenswrapper[4720]: E0202 08:57:22.886327 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.886417 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:22 crc kubenswrapper[4720]: E0202 08:57:22.886473 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.980426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.980495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.980511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.980538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:22 crc kubenswrapper[4720]: I0202 08:57:22.980559 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:22Z","lastTransitionTime":"2026-02-02T08:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.084506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.084587 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.084600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.084640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.084694 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.187585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.187623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.187634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.187652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.187669 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.291995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.292074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.292089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.292112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.292136 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.396110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.396154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.396166 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.396185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.396200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.500048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.500270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.500297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.500369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.500389 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.603709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.603800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.603825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.603860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.603922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.706518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.706601 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.706624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.706653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.706675 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.810744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.810819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.810838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.810867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.810922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.865662 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:01:02.700118008 +0000 UTC Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.886424 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:23 crc kubenswrapper[4720]: E0202 08:57:23.886658 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.914060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.914121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.914139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.914166 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:23 crc kubenswrapper[4720]: I0202 08:57:23.914191 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:23Z","lastTransitionTime":"2026-02-02T08:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.021693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.021770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.021794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.021831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.021861 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.124603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.124671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.124687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.124716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.124735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.227867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.227987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.228012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.228047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.228075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.331478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.331537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.331554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.331578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.331596 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.435361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.435428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.435446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.435474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.435492 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.538725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.538807 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.538833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.538867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.538932 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.642399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.642510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.642530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.642554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.642572 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.746000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.746112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.746138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.746183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.746208 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.831603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.831682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.831704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.831731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.831750 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.854569 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:24Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.860686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.860753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.860771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.860799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.860817 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.865971 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:18:06.174519557 +0000 UTC Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.881509 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:24Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.886530 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.886618 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.886710 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.886952 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.887065 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.887215 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.887410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.887460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.887479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.887507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.887528 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.914358 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:24Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.921674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.921784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.921814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.921925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.921959 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.945213 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:24Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.951602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.951677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.951694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.951721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.951744 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.975388 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:24Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:24 crc kubenswrapper[4720]: E0202 08:57:24.975641 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.978332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.978387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.978415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.978449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:24 crc kubenswrapper[4720]: I0202 08:57:24.978475 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:24Z","lastTransitionTime":"2026-02-02T08:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.081644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.081708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.081725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.081750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.081769 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.185521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.185580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.185598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.185624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.185641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.290304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.290377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.290749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.290791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.290814 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.394164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.394215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.394228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.394247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.394259 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.497701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.497748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.497759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.497774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.497782 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.600805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.600859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.600870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.600907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.600920 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.704101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.704165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.704187 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.704220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.704244 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.808038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.808106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.808124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.808150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.808168 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.867017 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:34:40.580464562 +0000 UTC Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.886925 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:25 crc kubenswrapper[4720]: E0202 08:57:25.887164 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.911911 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.911968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.911985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.912011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:25 crc kubenswrapper[4720]: I0202 08:57:25.912030 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:25Z","lastTransitionTime":"2026-02-02T08:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.015702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.015766 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.015784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.015809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.015830 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.118972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.119070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.119096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.119134 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.119169 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.222131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.222186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.222199 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.222218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.222233 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.325836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.327063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.327172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.327212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.327235 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.430608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.430677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.430696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.430725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.430747 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.535353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.535408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.535419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.535447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.535462 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.639141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.639212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.639231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.639259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.639281 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.742702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.742779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.742796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.742826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.742850 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.846328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.846410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.846431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.846463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.846484 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.868001 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:07:45.712970146 +0000 UTC Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.886718 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.886788 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.886909 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:26 crc kubenswrapper[4720]: E0202 08:57:26.887220 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:26 crc kubenswrapper[4720]: E0202 08:57:26.887417 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:26 crc kubenswrapper[4720]: E0202 08:57:26.887623 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.908545 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:26Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.925082 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:26Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949242 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:26Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.949588 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:26Z","lastTransitionTime":"2026-02-02T08:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.970237 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:26Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:26 crc kubenswrapper[4720]: I0202 08:57:26.989285 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:26Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.009016 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.025930 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.046666 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.053935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.054013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.054032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.054058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.054077 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.069935 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.093040 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.117074 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.158194 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.159755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.160076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.160224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.160382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.160521 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.174538 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.194647 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.217228 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.239575 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.258524 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:27Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.264253 4720 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.264303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.264320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.264344 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.264361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.367917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.367978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.367994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.368021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.368038 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.471500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.472076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.472236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.472391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.472527 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.575761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.575817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.575836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.575861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.575912 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.678643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.678706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.678722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.678748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.678770 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.782715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.782812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.782833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.782867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.782922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.868394 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:24:09.281878483 +0000 UTC Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.885766 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.885825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.885843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.885868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.885923 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.886292 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:27 crc kubenswrapper[4720]: E0202 08:57:27.886613 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.989387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.989470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.989497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.989534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:27 crc kubenswrapper[4720]: I0202 08:57:27.989557 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:27Z","lastTransitionTime":"2026-02-02T08:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.093540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.093610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.093627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.093655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.093674 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.196858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.196932 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.196943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.196964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.196976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.299739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.299824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.299845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.299875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.299929 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.403622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.403678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.403695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.403720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.403739 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.506835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.507306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.507407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.507518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.507625 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.610639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.610682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.610693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.610710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.610725 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.713660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.713705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.713715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.713734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.713747 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.817654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.817808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.817835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.817870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.817930 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.868683 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:53:28.708579669 +0000 UTC Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.887261 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.887463 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.887773 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:28 crc kubenswrapper[4720]: E0202 08:57:28.888083 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:28 crc kubenswrapper[4720]: E0202 08:57:28.888231 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:28 crc kubenswrapper[4720]: E0202 08:57:28.888571 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.890234 4720 scope.go:117] "RemoveContainer" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.921191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.921273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.921301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.921338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:28 crc kubenswrapper[4720]: I0202 08:57:28.921365 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:28Z","lastTransitionTime":"2026-02-02T08:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.025168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.025237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.025257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.025285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.025305 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.129486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.129722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.129751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.129780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.129801 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.233413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.233471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.233488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.233513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.233530 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.337375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.337445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.337464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.337493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.337515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.440106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.440156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.440169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.440190 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.440204 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.542637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.542699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.542713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.542732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.542745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.645632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.645683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.645694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.645716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.645732 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.731780 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/2.log" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.735824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.736533 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.748636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.748684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.748697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.748716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.748730 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.755587 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.770714 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.787622 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.807341 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.821738 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.844533 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.852620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.852685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.852698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.852718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.852733 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.866959 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.868812 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:18:04.235219356 +0000 UTC Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.886509 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:29 crc kubenswrapper[4720]: E0202 08:57:29.886717 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.897632 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.923579 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.944003 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.955787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.955836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.955851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.955870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.955905 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:29Z","lastTransitionTime":"2026-02-02T08:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.960794 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.976172 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:29 crc kubenswrapper[4720]: I0202 08:57:29.992447 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:29Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.008237 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
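The kube-multus termination message quoted above shows why multus restarted: after starting, it polls for the default network's readiness-indicator file and gives up with "timed out waiting for the condition" (multus uses apimachinery's wait helpers, hence the "pollimmediate" wording). The stdlib sketch below reproduces that behavior; the poll interval and the ~45s window (08:56:30 to 08:57:15 in the message above) are assumptions, not multus's configured values.

// Sketch: poll for a readiness-indicator file until it exists or a
// timeout elapses, mirroring the check logged above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls path every interval until it exists or timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file present: default network is ready
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path taken from the log entry above; interval/timeout are assumed.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println(err)
}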
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.026458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status 
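The NetworkReady=false condition repeated throughout this log ("no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?") is the container runtime reporting that its CNI configuration directory holds no network config yet; it clears once ovn-kubernetes writes one. The directory scan below is a simplified stand-in for the real CRI-O/libcni check, not its actual implementation.

// Sketch: report network-not-ready while the CNI conf dir has no
// .conf/.conflist/.json network configuration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory named in the recurring log message.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d/")
	if err != nil || !ok {
		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?")
	}
}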
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.041478 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.057071 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.059417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.059462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.059473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.059490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.059523 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.163037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.163117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.163147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.163168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.163180 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.266762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.266837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.266855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.266918 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.266939 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.370067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.370119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.370128 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.370150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.370163 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.474162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.474215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.474248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.474269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.474281 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.579341 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.579405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.579423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.579450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.579468 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.683357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.683428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.683446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.683472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.683491 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.746244 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/3.log"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.747412 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/2.log"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.752095 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" exitCode=1
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.752313 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1"}
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.753027 4720 scope.go:117] "RemoveContainer" containerID="07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.753819 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1"
Feb 02 08:57:30 crc kubenswrapper[4720]: E0202 08:57:30.754252 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e"
Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.779478 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.788033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.788121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.788144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.788177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.788198 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.802003 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.821271 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.845982 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.863556 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.869041 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:02:34.429483039 +0000 UTC Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.882066 
4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.886428 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.886464 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.886526 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:30 crc kubenswrapper[4720]: E0202 08:57:30.886605 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:30 crc kubenswrapper[4720]: E0202 08:57:30.886766 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:30 crc kubenswrapper[4720]: E0202 08:57:30.886933 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.892157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.892196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.892207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.892226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.892242 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.906308 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.924393 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.942439 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.959552 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.966279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:57:30 crc kubenswrapper[4720]: E0202 08:57:30.966471 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.966427824 +0000 UTC m=+148.822053420 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.978611 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.996671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.996715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.996727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.996751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:30 crc kubenswrapper[4720]: I0202 08:57:30.996772 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:30Z","lastTransitionTime":"2026-02-02T08:57:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.002685 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T08:57:30Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.028451 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c6d59cc910c18e3b34255a7e6f04b0a77595315922f34ef83e023519064704\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:56:59Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0202 08:56:58.990396 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0202 08:56:58.990458 6375 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0202 08:56:58.990462 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0202 08:56:58.990448 6375 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0202 08:56:58.990485 6375 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0202 08:56:58.990494 6375 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0202 08:56:58.990457 6375 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:29Z\\\",\\\"message\\\":\\\"tory.egressNode crc took: 3.982888ms\\\\nI0202 08:57:29.933823 6801 factory.go:1336] Added *v1.Node event handler 7\\\\nI0202 08:57:29.933872 6801 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0202 08:57:29.933947 6801 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:57:29.933983 6801 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:57:29.934040 6801 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 08:57:29.934066 6801 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:57:29.934106 6801 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 08:57:29.934122 6801 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:57:29.934208 6801 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934249 6801 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 08:57:29.934285 6801 factory.go:656] Stopping watch factory\\\\nI0202 08:57:29.934308 6801 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934316 6801 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 08:57:29.934359 6801 ovnkube.go:599] Stopped ovnkube\\\\nI0202 08:57:29.934390 6801 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 08:57:29.934477 6801 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.046069 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.067650 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.068139 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.067784 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.067950 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068560 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.068510944 +0000 UTC m=+148.924136540 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068561 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068618 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068646 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068709 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068765 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068788 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068730 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.068702488 +0000 UTC m=+148.924328074 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.068927 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.068910203 +0000 UTC m=+148.924535789 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.068343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.069617 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.069754 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.069871 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.069843386 +0000 UTC m=+148.925468982 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.090104 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.100043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.100099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.100112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.100137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.100157 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.112934 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.203513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.203594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.203614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.203644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.203664 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.307755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.307857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.307926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.307971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.308001 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.415599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.415693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.415720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.415755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.415780 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.519865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.519964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.519978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.520004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.520018 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.623853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.623995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.624020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.624049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.624066 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.728354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.728437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.728458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.728489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.728508 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.761148 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/3.log" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.774202 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.774483 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.797454 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.824593 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.832938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.833008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.833028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.833055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.833075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.848999 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.869252 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:51:15.187378008 +0000 UTC Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.872055 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.886641 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:31 crc kubenswrapper[4720]: E0202 08:57:31.887778 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.894207 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.919915 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.936653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.936723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.936749 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.936781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.936805 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:31Z","lastTransitionTime":"2026-02-02T08:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.941140 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.964057 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:31 crc kubenswrapper[4720]: I0202 08:57:31.987711 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:31Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.007968 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.030731 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:29Z\\\",\\\"message\\\":\\\"tory.egressNode crc took: 3.982888ms\\\\nI0202 08:57:29.933823 6801 factory.go:1336] Added *v1.Node event handler 7\\\\nI0202 08:57:29.933872 6801 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0202 08:57:29.933947 6801 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:57:29.933983 6801 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:57:29.934040 6801 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 08:57:29.934066 6801 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:57:29.934106 6801 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 08:57:29.934122 6801 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:57:29.934208 6801 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934249 6801 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 08:57:29.934285 6801 factory.go:656] Stopping watch factory\\\\nI0202 08:57:29.934308 6801 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934316 6801 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 08:57:29.934359 6801 ovnkube.go:599] Stopped ovnkube\\\\nI0202 08:57:29.934390 6801 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 08:57:29.934477 6801 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.039803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.039904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.039925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.039952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.039969 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.055985 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.076706 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.094459 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.108969 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.129810 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.143428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.143631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.143693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.143762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.143823 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.146675 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:32Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.247528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.247625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.247648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.247681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.247704 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.352529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.352957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.353030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.353101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.353179 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.457502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.457575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.457594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.457623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.457647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.560791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.560838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.560850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.560867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.560876 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.663292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.663325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.663335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.663350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.663361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869658 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:58:45.84392306 +0000 UTC
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.869852 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.886231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.886248 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:32 crc kubenswrapper[4720]: E0202 08:57:32.886411 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:32 crc kubenswrapper[4720]: E0202 08:57:32.886606 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.887171 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:32 crc kubenswrapper[4720]: E0202 08:57:32.887422 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.973817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.973969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.973990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.974019 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:32 crc kubenswrapper[4720]: I0202 08:57:32.974037 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:32Z","lastTransitionTime":"2026-02-02T08:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.078176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.078269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.078293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.078330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.078353 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.181616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.181688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.181710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.181738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.181758 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.286308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.286390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.286411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.286445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.286468 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.391167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.391347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.391369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.391431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.391452 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.495429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.495499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.495516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.495550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.495568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.599226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.599791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.600087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.600268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.600461 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.704414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.704495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.704513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.704543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.704562 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.807507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.807964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.808127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.808278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.808422 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.870607 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:35:07.139852615 +0000 UTC
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.886061 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:33 crc kubenswrapper[4720]: E0202 08:57:33.886501 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.912319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.912381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.912400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.912431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:33 crc kubenswrapper[4720]: I0202 08:57:33.912455 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:33Z","lastTransitionTime":"2026-02-02T08:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.016558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.016633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.016651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.016678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.016700 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.120774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.120835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.120855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.120931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.120953 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.224467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.224557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.224586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.224620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.224648 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.328673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.328734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.328745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.328768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.328778 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.432819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.432925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.432950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.432982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.433002 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.536869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.536959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.536977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.537011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.537035 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.640656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.640759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.640783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.640808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.640827 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.745314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.746042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.746065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.746100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.746125 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.849948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.850036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.850063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.850099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.850140 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.870833 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:04:31.83777471 +0000 UTC
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.886615 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.886728 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.886631 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:34 crc kubenswrapper[4720]: E0202 08:57:34.887170 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:34 crc kubenswrapper[4720]: E0202 08:57:34.887573 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:34 crc kubenswrapper[4720]: E0202 08:57:34.887686 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.953780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.953833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.953842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.953897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:34 crc kubenswrapper[4720]: I0202 08:57:34.953910 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:34Z","lastTransitionTime":"2026-02-02T08:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.039389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.039463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.039480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.039504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.039520 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.055649 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:35Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.062409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.062463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.062476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.062498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.062514 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.082954 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:35Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.089003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.089085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.089105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.089137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.089259 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.111467 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:35Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.118509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.118573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.118596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.118623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.118641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.149546 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:35Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.156955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.157409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.157568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.157728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.157942 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.180367 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:35Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.180604 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.183550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.183597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.183617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.183646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.183669 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.288206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.288776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.289056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.289266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.289433 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.394031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.394147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.394167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.394195 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.394216 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.497483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.497995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.498232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.498412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.498574 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.601657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.602180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.602322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.602478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.602624 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.706629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.706691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.706708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.706734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.706752 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.810006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.810053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.810065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.810087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.810098 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.871281 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:29:31.311242323 +0000 UTC
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.885842 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:35 crc kubenswrapper[4720]: E0202 08:57:35.886113 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.913529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.913621 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.913641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.913674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:35 crc kubenswrapper[4720]: I0202 08:57:35.913700 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:35Z","lastTransitionTime":"2026-02-02T08:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.016992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.017058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.017077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.017109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.017134 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.120674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.120768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.120795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.120829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.120850 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.224584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.224659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.224677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.224704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.224722 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.328682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.328824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.328846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.328935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.328963 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.433065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.433152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.433176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.433210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.433231 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.536151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.536228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.536244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.536272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.536291 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.640958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.641035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.641053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.641087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.641108 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.745629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.745735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.745771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.745811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.745837 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.849446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.849529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.849557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.849587 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.849607 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.872424 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:22:20.632874005 +0000 UTC
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.886129 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.886235 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:36 crc kubenswrapper[4720]: E0202 08:57:36.886438 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.886543 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:36 crc kubenswrapper[4720]: E0202 08:57:36.886736 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:36 crc kubenswrapper[4720]: E0202 08:57:36.887317 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.911703 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7927e1ecc56ac45b3bf4ef3bdec4552f5f1444a5e40c7d2130ddbd64e97c4e9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e1665b1a3fa4d742c0a4a0e678b8f6eae49d37c97eea67c24eed45778df47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.929758 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-t6hpn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d161a80a-b09b-456a-a1f7-2fabcf16d4fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9196fa3d9a97544f2fc72d397fcae0dc7c920e2760453ff6c6a00329753004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9jlc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t6hpn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.950619 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c249917a-a18b-49c1-807a-3c567ea9952a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301f7fdd066fa23b581056c68423fb9bdeb29007eb7e9e02acaf00c232f022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfab339a83ff46709578d34b70a19fde594f7b9a8bc04a69e82aceed9568216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzg9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zzpkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 
08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.954066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.954150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.954225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.954270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.954298 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:36Z","lastTransitionTime":"2026-02-02T08:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:36 crc kubenswrapper[4720]: I0202 08:57:36.978475 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1e9a68b-f276-49cc-93d3-cc5201783946\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15a03e51d0f20f729b85c1746843540f26f916a1418c919f248813e4c7eb95e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d14f618ee04a1e5d762875c1198b595b0a17021caeff0120a99292095c2ab6c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bff196e3ebb55b20ba8205e7691b890225a765e22346d40edf625914bcb6ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:36Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.005705 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e876433589993a04890ea3f4d02bdd4a711595fdfa86407cf76094093b7c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.027002 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7553eff4303d6cd171d80190938c4a27306edab43dd067388f6f827b70a4496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.057539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.057585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.057600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.057620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.057635 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.067850 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:29Z\\\",\\\"message\\\":\\\"tory.egressNode crc took: 3.982888ms\\\\nI0202 08:57:29.933823 6801 factory.go:1336] Added *v1.Node event handler 7\\\\nI0202 08:57:29.933872 6801 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0202 08:57:29.933947 6801 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 08:57:29.933983 6801 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 08:57:29.934040 6801 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 08:57:29.934066 6801 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 08:57:29.934106 6801 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 08:57:29.934122 6801 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 08:57:29.934208 6801 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934249 6801 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 08:57:29.934285 6801 factory.go:656] Stopping watch factory\\\\nI0202 08:57:29.934308 6801 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 08:57:29.934316 6801 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 08:57:29.934359 6801 ovnkube.go:599] Stopped ovnkube\\\\nI0202 08:57:29.934390 6801 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 08:57:29.934477 6801 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:57:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjcd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mrwzp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.085565 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37eb17d6-3474-4c16-aa20-cc508c7992fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l4gh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9qlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.101973 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.125780 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ft6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T08:57:15Z\\\",\\\"message\\\":\\\"2026-02-02T08:56:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8\\\\n2026-02-02T08:56:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6cd3538c-3d4d-402e-9981-e80902f522b8 to /host/opt/cni/bin/\\\\n2026-02-02T08:56:30Z [verbose] multus-daemon started\\\\n2026-02-02T08:56:30Z [verbose] Readiness Indicator file check\\\\n2026-02-02T08:57:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjctx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ft6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.152213 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dc284b2-dbff-4c9d-9fd9-1c1d0381eb8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4e86a2db6aa782b9ea074acddc64d44965acbd0bf0c6860e21b56d4c373610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://905a64f1a82dcb142ad69284c7a753e3b70fce99cd3c3569977cc46d65ca2376\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93947047f87a424880e56808239935ed421dbfb19d6ac62d40eacd407188db59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844a0dad03041d0295ed4678daaa576a2b36d7345c3575e556ef167977bfb3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a34e32f88096cc9d537fef8f1ce6c7127f06c41d16fb236b1cfb500fbec767b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b95312a2e97bdfaff90a664e143fd39ec6435e4538df6b839ad21aa23892fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06516730c2a4ceece9c6f1052a97b3e8e82bbe3b403f051ab987167b48cdd03e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwbjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lw7ql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.162422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.162482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:37 crc 
kubenswrapper[4720]: I0202 08:57:37.162541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.162579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.162602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.172321 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0342796d-ac1a-4cfa-8666-1c772eab1ed2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11eeec1011cb0e65411d329a35a1b487fdc35e9ada18973976c6313b02e38d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:28Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sm5gb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8l7nw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.204572 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.237682 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n258j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdad4980-ba8c-4eae-a74f-04ae0aa67a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b40db863b536838ea394411ca91a4a58ecd111fab82ae558817e3d71a9ce3c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fp7mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n258j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.262536 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99f2e153-a112-4dea-97b9-a401b1fed68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T08:56:21Z\\\",\\\"message\\\":\\\"W0202 08:56:10.264471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 08:56:10.265121 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770022570 cert, and key in /tmp/serving-cert-1891324828/serving-signer.crt, /tmp/serving-cert-1891324828/serving-signer.key\\\\nI0202 08:56:10.469821 1 observer_polling.go:159] Starting file observer\\\\nW0202 08:56:10.473625 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 08:56:10.474085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 08:56:10.477291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1891324828/tls.crt::/tmp/serving-cert-1891324828/tls.key\\\\\\\"\\\\nF0202 08:56:21.157229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.265317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.265391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.265414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.265442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.265464 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.277831 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bef65bab-dcbe-43cc-b2fc-2e621b03c14e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc4b32293dd5b955a078e782c742d1430fff9b08b1de64a25d3e56df3cd01cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbeffe388cca7681ca782967eeaf872f4ad71f6fb52739bbe1435dfe32ee5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0946700c6947dd68fba9d4fb362046a012fbdfceee4423d4a587cc584ea06ddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T08:56:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bacb6185062b5dd103c1cc075849eb0d3cb9375c56c98b07db8a7a463c7b975d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T08:56:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T08:56:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T08:56:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.293549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T08:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:37Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.368996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.369048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.369065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.369090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.369110 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.472832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.472980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.473011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.473053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.473081 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.576595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.576760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.576783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.576812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.576835 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.679782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.679853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.679873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.679928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.679949 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.786936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.786976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.786987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.787004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.787015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.873287 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:58:09.878606644 +0000 UTC
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.886807 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:37 crc kubenswrapper[4720]: E0202 08:57:37.887058 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.890137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.890185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.890206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.890232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.890251 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.992510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.992575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.992595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.992623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:37 crc kubenswrapper[4720]: I0202 08:57:37.992645 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:37Z","lastTransitionTime":"2026-02-02T08:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.095367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.095460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.095483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.095519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.095545 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.199035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.199092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.199105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.199132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.199148 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.301448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.301496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.301508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.301528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.301544 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.404283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.404335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.404351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.404370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.404384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.508224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.508261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.508271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.508288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.508299 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.612355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.612421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.612438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.612467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.612486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.716212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.716289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.716306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.716333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.716351 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.819511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.819569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.819579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.819600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.819615 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.873729 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:56:58.169701844 +0000 UTC
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.886218 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.886306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.886235 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:38 crc kubenswrapper[4720]: E0202 08:57:38.886426 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:38 crc kubenswrapper[4720]: E0202 08:57:38.886577 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:38 crc kubenswrapper[4720]: E0202 08:57:38.886805 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.929151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.929214 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.929226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.929246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:38 crc kubenswrapper[4720]: I0202 08:57:38.929259 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:38Z","lastTransitionTime":"2026-02-02T08:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.032004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.032054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.032065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.032085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.032137 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.135727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.135784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.135802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.135829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.135849 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.239010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.239086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.239106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.239135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.239155 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.342161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.342204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.342220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.342241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.342253 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.446660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.446733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.446753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.446780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.446804 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.550531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.550659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.550681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.550709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.550729 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.655113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.655205 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.655240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.655279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.655302 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.759605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.759667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.759686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.759716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.759735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.863645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.863724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.863750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.863782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.863808 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.874209 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:16:30.348140446 +0000 UTC Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.885794 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:39 crc kubenswrapper[4720]: E0202 08:57:39.886029 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.967692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.967753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.967773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.967800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:39 crc kubenswrapper[4720]: I0202 08:57:39.967819 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:39Z","lastTransitionTime":"2026-02-02T08:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.071119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.071194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.071215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.071246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.071267 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.174757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.174819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.174834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.174860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.174873 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.278137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.278210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.278235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.278267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.278291 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.382095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.382152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.382164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.382186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.382200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.485156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.485210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.485224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.485247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.485262 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.588799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.588926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.588947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.588975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.588993 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.692627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.692685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.692704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.692730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.692755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.796337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.796415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.796437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.796463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.796483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.874847 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:56:03.260186425 +0000 UTC
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.886231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.886329 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:40 crc kubenswrapper[4720]: E0202 08:57:40.886692 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.886741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:40 crc kubenswrapper[4720]: E0202 08:57:40.887002 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:40 crc kubenswrapper[4720]: E0202 08:57:40.887107 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.900831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.900926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.900945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.900972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.900991 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:40Z","lastTransitionTime":"2026-02-02T08:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:40 crc kubenswrapper[4720]: I0202 08:57:40.904991 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.004151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.004254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.004288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.004320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.004342 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.108267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.108325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.108342 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.108368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.108387 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.211367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.211428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.211444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.211472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.211493 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.314724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.314802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.314827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.314929 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.314958 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.418711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.418772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.418795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.418819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.418837 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.522568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.522664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.522690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.522729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.522758 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.626163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.626231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.626248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.626280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.626302 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.731727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.731810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.731837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.731928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.731959 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.834659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.834730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.834794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.834825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.834845 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.875333 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:16:44.288639351 +0000 UTC
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.886749 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:41 crc kubenswrapper[4720]: E0202 08:57:41.887098 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.938063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.938163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.938191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.938229 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:41 crc kubenswrapper[4720]: I0202 08:57:41.938255 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:41Z","lastTransitionTime":"2026-02-02T08:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.041990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.042090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.042117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.042151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.042172 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.145607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.145682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.145700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.145727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.145746 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.249031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.249113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.249136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.249169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.249195 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.352411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.352484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.352503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.352530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.352553 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.457753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.457810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.457829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.457854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.458011 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.560834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.560928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.560949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.560980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.561002 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.665621 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.665691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.665710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.665743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.665763 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.769236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.769360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.769384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.769418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.769439 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.873095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.873159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.873177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.873209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.873227 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.876314 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:10:17.108464749 +0000 UTC Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.886820 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.887006 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:42 crc kubenswrapper[4720]: E0202 08:57:42.887146 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:42 crc kubenswrapper[4720]: E0202 08:57:42.887293 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.887561 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:42 crc kubenswrapper[4720]: E0202 08:57:42.887917 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.977024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.977103 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.977130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.977162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:42 crc kubenswrapper[4720]: I0202 08:57:42.977184 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:42Z","lastTransitionTime":"2026-02-02T08:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.080593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.080652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.080665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.080690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.080703 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.184670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.184771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.184796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.184834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.184860 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.288548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.288620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.288640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.288667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.288686 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.394074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.394188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.394223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.394263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.394289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.497556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.497625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.497644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.497671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.497691 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.601021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.601078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.601100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.601124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.601144 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.704332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.704414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.704433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.704464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.704484 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.808084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.808161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.808188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.808227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.808257 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.876663 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:29:16.750865372 +0000 UTC Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.886114 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:43 crc kubenswrapper[4720]: E0202 08:57:43.886321 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.911728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.911800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.911812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.911834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:43 crc kubenswrapper[4720]: I0202 08:57:43.911847 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:43Z","lastTransitionTime":"2026-02-02T08:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.015399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.015480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.015503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.015535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.015557 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.120741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.120811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.120828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.120856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.120874 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.224710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.224787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.224808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.224838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.224859 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.327924 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.327986 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.328007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.328032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.328051 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.432667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.432739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.432751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.432773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.432796 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.536354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.536462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.536488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.536526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.536552 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.640081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.640148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.640171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.640206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.640229 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.742827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.742862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.742871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.742901 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.742911 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.844999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.845061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.845080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.845110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.845128 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.877214 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:48:09.434474938 +0000 UTC Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.886671 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.886703 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.886762 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:44 crc kubenswrapper[4720]: E0202 08:57:44.886861 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:44 crc kubenswrapper[4720]: E0202 08:57:44.887027 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:44 crc kubenswrapper[4720]: E0202 08:57:44.887171 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.888219 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" Feb 02 08:57:44 crc kubenswrapper[4720]: E0202 08:57:44.888424 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.949639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.949700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.949711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.949740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:44 crc kubenswrapper[4720]: I0202 08:57:44.949756 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:44Z","lastTransitionTime":"2026-02-02T08:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.053800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.053945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.053965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.053990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.054006 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.157730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.157798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.157815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.157841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.157862 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.260728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.260792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.260808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.260831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.260845 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.276528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.276564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.276576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.276591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.276602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.302004 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T08:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"10ddc092-4c99-4e64-a9bb-9df8e5d5980d\\\",\\\"systemUUID\\\":\\\"8eba435a-7b37-4df4-91be-d95f0b76d6c8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:45Z is after 2025-08-24T17:21:41Z" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.308447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.308550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.308568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.308628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.308651 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.333832 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ...duplicate status patch payload elided; byte-identical to the preceding attempt... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:45Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.340576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.340666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.340687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.340716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.340739 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.364664 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ...duplicate status patch payload elided; byte-identical to the preceding attempt... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:45Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.371150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.371221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.371239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.371267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.371283 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.387745 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ...duplicate status patch payload elided; byte-identical to the preceding attempt... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:45Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.394480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.394554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.394572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.394604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.394626 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.411272 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ...duplicate status patch payload elided; byte-identical to the preceding attempt... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T08:57:45Z is after 2025-08-24T17:21:41Z"
Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.411694 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.414461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.414519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.414538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.414568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.414588 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.517830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.517895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.517908 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.517926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.517938 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.620897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.620948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.620963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.620985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.620999 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.724394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.724534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.724557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.724624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.724647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.828310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.828920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.829121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.829319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.829488 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.877792 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:30:50.459414927 +0000 UTC Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.886313 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:45 crc kubenswrapper[4720]: E0202 08:57:45.887418 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.933323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.933408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.933426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.933455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:45 crc kubenswrapper[4720]: I0202 08:57:45.933476 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:45Z","lastTransitionTime":"2026-02-02T08:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.036788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.036850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.036871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.036997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.037098 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:46Z","lastTransitionTime":"2026-02-02T08:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.370179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:46 crc kubenswrapper[4720]: E0202 08:57:46.370496 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 08:57:46 crc kubenswrapper[4720]: E0202 08:57:46.370639 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs podName:37eb17d6-3474-4c16-aa20-cc508c7992fc nodeName:}" failed. No retries permitted until 2026-02-02 08:58:50.370596852 +0000 UTC m=+164.226222588 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs") pod "network-metrics-daemon-9qlsb" (UID: "37eb17d6-3474-4c16-aa20-cc508c7992fc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.454145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.454217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.454236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.454265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.454288 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:46Z","lastTransitionTime":"2026-02-02T08:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
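The durationBeforeRetry of 1m4s in the nestedpendingoperations record is consistent with exponential backoff on repeated MountVolume failures: 64s is 500ms doubled seven times. A minimal sketch of that doubling; the 500ms initial delay and 2m2s cap are assumptions for illustration, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous retry delay up to a cap, the pattern
// suggested by the growing durationBeforeRetry values in the log.
func nextDelay(prev, initial, max time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	d := 2 * prev
	if d > max {
		return max
	}
	return d
}

func main() {
	var d time.Duration
	// Assumed parameters: 500ms initial delay, 2m2s cap.
	for i := 0; i < 9; i++ {
		d = nextDelay(d, 500*time.Millisecond, 2*time.Minute+2*time.Second)
		fmt.Printf("failure %d -> retry in %s\n", i+1, d)
	}
	// failure 8 -> retry in 1m4s, matching the record above.
}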
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.878656 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:30:08.097572778 +0000 UTC
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.886341 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.886472 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.886589 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:46 crc kubenswrapper[4720]: E0202 08:57:46.886530 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 08:57:46 crc kubenswrapper[4720]: E0202 08:57:46.886725 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:46 crc kubenswrapper[4720]: E0202 08:57:46.886960 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:46 crc kubenswrapper[4720]: I0202 08:57:46.926970 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.92694133 podStartE2EDuration="1m18.92694133s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:46.926698124 +0000 UTC m=+100.782323720" watchObservedRunningTime="2026-02-02 08:57:46.92694133 +0000 UTC m=+100.782566896"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.022283 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-t6hpn" podStartSLOduration=80.022253543 podStartE2EDuration="1m20.022253543s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.021590966 +0000 UTC m=+100.877216572" watchObservedRunningTime="2026-02-02 08:57:47.022253543 +0000 UTC m=+100.877879109"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.039289 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zzpkv" podStartSLOduration=79.03922747 podStartE2EDuration="1m19.03922747s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.038472921 +0000 UTC m=+100.894098487" watchObservedRunningTime="2026-02-02 08:57:47.03922747 +0000 UTC m=+100.894853036"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.081700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.081776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.081794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.081817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.081834 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:47Z","lastTransitionTime":"2026-02-02T08:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.104826 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ft6vx" podStartSLOduration=80.104800132 podStartE2EDuration="1m20.104800132s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.078830783 +0000 UTC m=+100.934456339" watchObservedRunningTime="2026-02-02 08:57:47.104800132 +0000 UTC m=+100.960425698"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.144088 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lw7ql" podStartSLOduration=80.144059817 podStartE2EDuration="1m20.144059817s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.104559466 +0000 UTC m=+100.960185042" watchObservedRunningTime="2026-02-02 08:57:47.144059817 +0000 UTC m=+100.999685373"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.185608 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.185587918 podStartE2EDuration="7.185587918s" podCreationTimestamp="2026-02-02 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.169349979 +0000 UTC m=+101.024975555" watchObservedRunningTime="2026-02-02 08:57:47.185587918 +0000 UTC m=+101.041213514"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.202278 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podStartSLOduration=80.202247677 podStartE2EDuration="1m20.202247677s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.201511609 +0000 UTC m=+101.057137205" watchObservedRunningTime="2026-02-02 08:57:47.202247677 +0000 UTC m=+101.057873233"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.225550 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.225521399 podStartE2EDuration="1m22.225521399s" podCreationTimestamp="2026-02-02 08:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.225412697 +0000 UTC m=+101.081038253" watchObservedRunningTime="2026-02-02 08:57:47.225521399 +0000 UTC m=+101.081146955"
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.242396 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.242368503 podStartE2EDuration="46.242368503s" podCreationTimestamp="2026-02-02 08:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.242056766 +0000 UTC m=+101.097682342" watchObservedRunningTime="2026-02-02 08:57:47.242368503 +0000 UTC m=+101.097994069"
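The m=+... suffix on these timestamps is Go's monotonic-clock reading, which counts from process start, so subtracting it from the wall-clock time recovers roughly when this kubelet started (about 08:56:06 UTC). A one-line check using values copied from the kube-rbac-proxy-crio-crc record above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// observedRunningTime and its monotonic offset from the record above.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-02 08:57:47.169349979 +0000 UTC")
	if err != nil {
		panic(err)
	}
	offset := 101*time.Second + 24975555*time.Nanosecond // m=+101.024975555
	fmt.Println(t.Add(-offset)) // 2026-02-02 08:56:06.144374424 +0000 UTC
}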
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.879543 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:57:23.269502635 +0000 UTC
Feb 02 08:57:47 crc kubenswrapper[4720]: I0202 08:57:47.885917 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:47 crc kubenswrapper[4720]: E0202 08:57:47.886304 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.014115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.014163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.014173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.014191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.014205 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:48Z","lastTransitionTime":"2026-02-02T08:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.879970 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:10:01.937401118 +0000 UTC
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.886534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.886542 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:57:48 crc kubenswrapper[4720]: I0202 08:57:48.886762 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 08:57:48 crc kubenswrapper[4720]: E0202 08:57:48.886987 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 08:57:48 crc kubenswrapper[4720]: E0202 08:57:48.887258 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 08:57:48 crc kubenswrapper[4720]: E0202 08:57:48.887447 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
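After the journald prefix ("Feb 02 08:57:48 crc kubenswrapper[4720]: "), every record follows the klog text header: a severity letter (I/W/E/F), MMDD date, wall time with sub-second precision, a PID field, source file:line, then the message. A small parser sufficient for the lines in this log (the regexp and field labels are my own):

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches e.g. `E0202 08:57:48.887447 4720 pod_workers.go:1301] ...`
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := `E0202 08:57:48.887447 4720 pod_workers.go:1301] "Error syncing pod, skipping" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		panic("no match")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}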
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.055250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.055677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.055945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.056149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.056300 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:49Z","lastTransitionTime":"2026-02-02T08:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.880706 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:01:43.766627606 +0000 UTC
Feb 02 08:57:49 crc kubenswrapper[4720]: I0202 08:57:49.886104 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb"
Feb 02 08:57:49 crc kubenswrapper[4720]: E0202 08:57:49.886417 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc"
Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.370292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.370349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.370358 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.370376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.370387 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.474250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.474380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.474396 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.474421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.474440 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.577991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.578051 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.578068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.578092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.578110 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.680726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.680801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.680818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.680847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.680868 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.784758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.784839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.784858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.784976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.785003 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.880938 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:05:44.551501021 +0000 UTC Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.886562 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.886585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.887099 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:50 crc kubenswrapper[4720]: E0202 08:57:50.887307 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:50 crc kubenswrapper[4720]: E0202 08:57:50.887408 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:50 crc kubenswrapper[4720]: E0202 08:57:50.887492 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.888317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.888412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.888438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.888477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.888504 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.908307 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n258j" podStartSLOduration=83.908262433 podStartE2EDuration="1m23.908262433s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:47.267573823 +0000 UTC m=+101.123199379" watchObservedRunningTime="2026-02-02 08:57:50.908262433 +0000 UTC m=+104.763888029" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.910551 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.994335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.994398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.994412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.994434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:50 crc kubenswrapper[4720]: I0202 08:57:50.994448 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:50Z","lastTransitionTime":"2026-02-02T08:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.097990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.098094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.098115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.098160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.098194 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.201771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.201833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.201856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.201917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.201940 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.305430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.305518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.305545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.305575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.305597 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.409024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.409084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.409100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.409125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.409142 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.513515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.513588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.513608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.513637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.513655 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.616447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.616542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.616628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.616658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.616675 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.719985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.720028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.720044 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.720064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.720081 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.822500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.822571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.822594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.822622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.822647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.881936 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:19:53.130896623 +0000 UTC Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.886277 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:51 crc kubenswrapper[4720]: E0202 08:57:51.886436 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.926245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.926309 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.926326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.926351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:51 crc kubenswrapper[4720]: I0202 08:57:51.926372 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:51Z","lastTransitionTime":"2026-02-02T08:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.029495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.029576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.029605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.029640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.029663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.133648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.133747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.133771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.133804 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.133827 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.237148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.237220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.237239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.237266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.237289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.340419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.340494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.340517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.340549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.340593 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.443738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.443828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.443853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.443922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.443950 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.547753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.547839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.547866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.547948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.547976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.651133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.651211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.651231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.651258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.651280 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.754271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.754320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.754328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.754345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.754356 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.858474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.858543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.858560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.858590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.858618 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.882476 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:03:57.825120491 +0000 UTC Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.887017 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.887068 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.887029 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:52 crc kubenswrapper[4720]: E0202 08:57:52.887441 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:52 crc kubenswrapper[4720]: E0202 08:57:52.887669 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:52 crc kubenswrapper[4720]: E0202 08:57:52.887823 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.962109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.962172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.962186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.962207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:52 crc kubenswrapper[4720]: I0202 08:57:52.962221 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:52Z","lastTransitionTime":"2026-02-02T08:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.065809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.065867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.065900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.065923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.065936 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.169252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.169328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.169346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.169372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.169391 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.272351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.272427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.272448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.272473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.272493 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.376113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.376222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.376253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.376280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.376301 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.479937 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.480017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.480041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.480079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.480105 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.582922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.582982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.582999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.583022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.583040 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.686116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.686202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.686235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.686269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.686294 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.789720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.789805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.789823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.789852 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.789872 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.883533 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:51:53.72325601 +0000 UTC Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.885815 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:53 crc kubenswrapper[4720]: E0202 08:57:53.886074 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.893395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.893465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.893483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.893510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.893527 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.996988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.997077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.997095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.997121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:53 crc kubenswrapper[4720]: I0202 08:57:53.997139 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:53Z","lastTransitionTime":"2026-02-02T08:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.099752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.099923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.099946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.099975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.099992 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.203241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.203350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.203370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.203395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.203413 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.307430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.307494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.307512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.307536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.307555 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.410684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.410752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.410769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.410793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.410811 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.515111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.515192 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.515215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.515243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.515263 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.618777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.618838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.618860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.618933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.618966 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.722169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.722235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.722247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.722266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.722278 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.826061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.826122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.826141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.826165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.826183 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.884523 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:24:34.797747358 +0000 UTC Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.886970 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.887191 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.887432 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:54 crc kubenswrapper[4720]: E0202 08:57:54.887415 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:54 crc kubenswrapper[4720]: E0202 08:57:54.887679 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:54 crc kubenswrapper[4720]: E0202 08:57:54.888269 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.929113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.929184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.929204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.929231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:54 crc kubenswrapper[4720]: I0202 08:57:54.929250 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:54Z","lastTransitionTime":"2026-02-02T08:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.032575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.032643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.032664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.032697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.032719 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.136060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.136130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.136148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.136176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.136194 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.239174 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.239252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.239273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.239304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.239325 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.342560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.342641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.342666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.342700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.342723 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.447039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.447114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.447140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.447195 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.447231 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.550358 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.550440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.550456 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.550480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.550496 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.653654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.653707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.653721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.653744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.653755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.668633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.668686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.668695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.668713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.668724 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T08:57:55Z","lastTransitionTime":"2026-02-02T08:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.738272 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72"] Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.738719 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.743564 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.743785 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.744339 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.744337 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.794271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60ec511f-cc75-49a0-930c-a8fea0484cc7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.794386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ec511f-cc75-49a0-930c-a8fea0484cc7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.794418 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.794693 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.795064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ec511f-cc75-49a0-930c-a8fea0484cc7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.798600 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.798572612 podStartE2EDuration="5.798572612s" podCreationTimestamp="2026-02-02 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:55.798440749 +0000 UTC 
m=+109.654066355" watchObservedRunningTime="2026-02-02 08:57:55.798572612 +0000 UTC m=+109.654198258" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.885502 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:38:21.073706666 +0000 UTC Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.885564 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.886374 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:55 crc kubenswrapper[4720]: E0202 08:57:55.886685 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.897439 4720 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60ec511f-cc75-49a0-930c-a8fea0484cc7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899196 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ec511f-cc75-49a0-930c-a8fea0484cc7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ec511f-cc75-49a0-930c-a8fea0484cc7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 
08:57:55.899406 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.899489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/60ec511f-cc75-49a0-930c-a8fea0484cc7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.900860 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/60ec511f-cc75-49a0-930c-a8fea0484cc7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.910856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ec511f-cc75-49a0-930c-a8fea0484cc7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:55 crc kubenswrapper[4720]: I0202 08:57:55.930220 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ec511f-cc75-49a0-930c-a8fea0484cc7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qgx72\" (UID: \"60ec511f-cc75-49a0-930c-a8fea0484cc7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.061694 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.885802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" event={"ID":"60ec511f-cc75-49a0-930c-a8fea0484cc7","Type":"ContainerStarted","Data":"c8d2b3db5bc031812c556825ee86032628a8de01d1ce28e1c4e1d272828be7c7"} Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.885913 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" event={"ID":"60ec511f-cc75-49a0-930c-a8fea0484cc7","Type":"ContainerStarted","Data":"8f81ce92ecd19bd7fb0d1fecd2f3cf59c09bfab5dca686203c5fc4d57acb7da7"} Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.886041 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.886140 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.886174 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:56 crc kubenswrapper[4720]: E0202 08:57:56.888499 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:56 crc kubenswrapper[4720]: E0202 08:57:56.888828 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:56 crc kubenswrapper[4720]: E0202 08:57:56.888376 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:56 crc kubenswrapper[4720]: I0202 08:57:56.911258 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qgx72" podStartSLOduration=89.911237755 podStartE2EDuration="1m29.911237755s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:57:56.909614965 +0000 UTC m=+110.765240551" watchObservedRunningTime="2026-02-02 08:57:56.911237755 +0000 UTC m=+110.766863341" Feb 02 08:57:57 crc kubenswrapper[4720]: I0202 08:57:57.886655 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:57 crc kubenswrapper[4720]: E0202 08:57:57.886844 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:58 crc kubenswrapper[4720]: I0202 08:57:58.886673 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:57:58 crc kubenswrapper[4720]: I0202 08:57:58.886715 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:57:58 crc kubenswrapper[4720]: E0202 08:57:58.886998 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:57:58 crc kubenswrapper[4720]: I0202 08:57:58.886860 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:57:58 crc kubenswrapper[4720]: E0202 08:57:58.887272 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:57:58 crc kubenswrapper[4720]: E0202 08:57:58.887495 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:57:59 crc kubenswrapper[4720]: I0202 08:57:59.886507 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:57:59 crc kubenswrapper[4720]: E0202 08:57:59.886989 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:57:59 crc kubenswrapper[4720]: I0202 08:57:59.887171 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" Feb 02 08:57:59 crc kubenswrapper[4720]: E0202 08:57:59.887353 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mrwzp_openshift-ovn-kubernetes(8f50847b-84da-40bb-9cc3-7ddb139f6c0e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" Feb 02 08:58:00 crc kubenswrapper[4720]: I0202 08:58:00.886358 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:00 crc kubenswrapper[4720]: I0202 08:58:00.886503 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:00 crc kubenswrapper[4720]: E0202 08:58:00.886591 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:00 crc kubenswrapper[4720]: I0202 08:58:00.886354 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:00 crc kubenswrapper[4720]: E0202 08:58:00.886716 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:00 crc kubenswrapper[4720]: E0202 08:58:00.886935 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.886430 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:01 crc kubenswrapper[4720]: E0202 08:58:01.886699 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.907371 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/1.log" Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.909096 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/0.log" Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.909187 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" containerID="2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36" exitCode=1 Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.909246 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerDied","Data":"2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36"} Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.909309 4720 scope.go:117] "RemoveContainer" containerID="9729255affc578588b8c89a770979dc630a987b4510b5b524647402b52e22d4b" Feb 02 08:58:01 crc kubenswrapper[4720]: I0202 08:58:01.910231 4720 scope.go:117] "RemoveContainer" containerID="2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36" Feb 02 08:58:01 crc kubenswrapper[4720]: E0202 08:58:01.910670 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ft6vx_openshift-multus(cd3c075e-27ea-4a49-b3bc-0bd6ca79c764)\"" pod="openshift-multus/multus-ft6vx" podUID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" Feb 02 08:58:02 crc kubenswrapper[4720]: I0202 08:58:02.885951 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:02 crc kubenswrapper[4720]: I0202 08:58:02.886002 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:02 crc kubenswrapper[4720]: E0202 08:58:02.886149 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:02 crc kubenswrapper[4720]: I0202 08:58:02.886182 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:02 crc kubenswrapper[4720]: E0202 08:58:02.886339 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:02 crc kubenswrapper[4720]: E0202 08:58:02.886549 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:02 crc kubenswrapper[4720]: I0202 08:58:02.915502 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/1.log" Feb 02 08:58:03 crc kubenswrapper[4720]: I0202 08:58:03.886824 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:03 crc kubenswrapper[4720]: E0202 08:58:03.887102 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:04 crc kubenswrapper[4720]: I0202 08:58:04.886306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:04 crc kubenswrapper[4720]: I0202 08:58:04.886307 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:04 crc kubenswrapper[4720]: I0202 08:58:04.886675 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:04 crc kubenswrapper[4720]: E0202 08:58:04.886704 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:04 crc kubenswrapper[4720]: E0202 08:58:04.886817 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:04 crc kubenswrapper[4720]: E0202 08:58:04.887066 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:05 crc kubenswrapper[4720]: I0202 08:58:05.886129 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:05 crc kubenswrapper[4720]: E0202 08:58:05.886275 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:06 crc kubenswrapper[4720]: I0202 08:58:06.886512 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:06 crc kubenswrapper[4720]: I0202 08:58:06.886561 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:06 crc kubenswrapper[4720]: I0202 08:58:06.886446 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:06 crc kubenswrapper[4720]: E0202 08:58:06.887609 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:06 crc kubenswrapper[4720]: E0202 08:58:06.887871 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:06 crc kubenswrapper[4720]: E0202 08:58:06.887754 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:06 crc kubenswrapper[4720]: E0202 08:58:06.903334 4720 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 08:58:07 crc kubenswrapper[4720]: E0202 08:58:07.026602 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 08:58:07 crc kubenswrapper[4720]: I0202 08:58:07.886294 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:07 crc kubenswrapper[4720]: E0202 08:58:07.886515 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:08 crc kubenswrapper[4720]: I0202 08:58:08.886733 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:08 crc kubenswrapper[4720]: I0202 08:58:08.886802 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:08 crc kubenswrapper[4720]: I0202 08:58:08.886742 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:08 crc kubenswrapper[4720]: E0202 08:58:08.887052 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:08 crc kubenswrapper[4720]: E0202 08:58:08.887202 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:08 crc kubenswrapper[4720]: E0202 08:58:08.887424 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:09 crc kubenswrapper[4720]: I0202 08:58:09.885841 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:09 crc kubenswrapper[4720]: E0202 08:58:09.886123 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:10 crc kubenswrapper[4720]: I0202 08:58:10.886698 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:10 crc kubenswrapper[4720]: I0202 08:58:10.886770 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:10 crc kubenswrapper[4720]: E0202 08:58:10.886866 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:10 crc kubenswrapper[4720]: E0202 08:58:10.887137 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:10 crc kubenswrapper[4720]: I0202 08:58:10.887497 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:10 crc kubenswrapper[4720]: E0202 08:58:10.887673 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:11 crc kubenswrapper[4720]: I0202 08:58:11.886335 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:11 crc kubenswrapper[4720]: E0202 08:58:11.886553 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:12 crc kubenswrapper[4720]: E0202 08:58:12.028740 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 08:58:12 crc kubenswrapper[4720]: I0202 08:58:12.886825 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:12 crc kubenswrapper[4720]: I0202 08:58:12.887048 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:12 crc kubenswrapper[4720]: E0202 08:58:12.887125 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:12 crc kubenswrapper[4720]: E0202 08:58:12.887235 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:12 crc kubenswrapper[4720]: I0202 08:58:12.886872 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:12 crc kubenswrapper[4720]: E0202 08:58:12.887496 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:13 crc kubenswrapper[4720]: I0202 08:58:13.886651 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:13 crc kubenswrapper[4720]: E0202 08:58:13.886866 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:13 crc kubenswrapper[4720]: I0202 08:58:13.888145 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.886189 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:14 crc kubenswrapper[4720]: E0202 08:58:14.886396 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.886697 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:14 crc kubenswrapper[4720]: E0202 08:58:14.886796 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.887315 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:14 crc kubenswrapper[4720]: E0202 08:58:14.887427 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.888581 4720 scope.go:117] "RemoveContainer" containerID="2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.953626 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9qlsb"] Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.954246 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:14 crc kubenswrapper[4720]: E0202 08:58:14.954357 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.970283 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/3.log" Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.974810 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerStarted","Data":"1199a8ef90482788a5fb7472156bbf633191d6af67369e6da58d8fd34a6aedc0"} Feb 02 08:58:14 crc kubenswrapper[4720]: I0202 08:58:14.976029 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:58:15 crc kubenswrapper[4720]: I0202 08:58:15.982725 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/1.log" Feb 02 08:58:15 crc kubenswrapper[4720]: I0202 08:58:15.982933 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerStarted","Data":"79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545"} Feb 02 08:58:16 crc kubenswrapper[4720]: I0202 08:58:16.008299 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podStartSLOduration=109.008239505 podStartE2EDuration="1m49.008239505s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:15.00936658 +0000 UTC m=+128.864992126" watchObservedRunningTime="2026-02-02 08:58:16.008239505 +0000 UTC m=+129.863865101" Feb 02 08:58:16 crc kubenswrapper[4720]: I0202 08:58:16.886506 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:16 crc kubenswrapper[4720]: I0202 08:58:16.886611 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:16 crc kubenswrapper[4720]: I0202 08:58:16.886676 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:16 crc kubenswrapper[4720]: E0202 08:58:16.888659 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:16 crc kubenswrapper[4720]: I0202 08:58:16.888713 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:16 crc kubenswrapper[4720]: E0202 08:58:16.888987 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:16 crc kubenswrapper[4720]: E0202 08:58:16.889317 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:16 crc kubenswrapper[4720]: E0202 08:58:16.889392 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:17 crc kubenswrapper[4720]: E0202 08:58:17.029357 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 08:58:18 crc kubenswrapper[4720]: I0202 08:58:18.886403 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:18 crc kubenswrapper[4720]: I0202 08:58:18.886409 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:18 crc kubenswrapper[4720]: E0202 08:58:18.886694 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:18 crc kubenswrapper[4720]: I0202 08:58:18.886362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:18 crc kubenswrapper[4720]: E0202 08:58:18.886988 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:18 crc kubenswrapper[4720]: I0202 08:58:18.887180 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:18 crc kubenswrapper[4720]: E0202 08:58:18.887199 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:18 crc kubenswrapper[4720]: E0202 08:58:18.887343 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:20 crc kubenswrapper[4720]: I0202 08:58:20.886209 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:20 crc kubenswrapper[4720]: I0202 08:58:20.886259 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:20 crc kubenswrapper[4720]: I0202 08:58:20.886288 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:20 crc kubenswrapper[4720]: I0202 08:58:20.886349 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:20 crc kubenswrapper[4720]: E0202 08:58:20.887048 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9qlsb" podUID="37eb17d6-3474-4c16-aa20-cc508c7992fc" Feb 02 08:58:20 crc kubenswrapper[4720]: E0202 08:58:20.887257 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 08:58:20 crc kubenswrapper[4720]: E0202 08:58:20.887538 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 08:58:20 crc kubenswrapper[4720]: E0202 08:58:20.887742 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.886625 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.886826 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.886872 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.886949 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.891330 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.891669 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.892272 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.892764 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.892997 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 08:58:22 crc kubenswrapper[4720]: I0202 08:58:22.894711 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.573072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.632253 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xz2ts"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.633140 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.636361 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.637543 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.638932 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5kntx"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.640107 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.644448 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.645441 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.650653 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.651113 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.660915 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.661067 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.661548 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.662552 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.664551 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.664866 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666213 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666572 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666602 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666689 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666607 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666830 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666942 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 
08:58:26.666941 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.666950 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.667193 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.667617 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668351 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668511 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668595 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668659 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668706 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668794 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668804 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.668907 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.669071 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.669303 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.669988 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.670036 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.679839 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.680688 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.681369 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.682519 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.683056 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.683570 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.684062 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.684554 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.691130 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmflj"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.691723 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.705918 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.710729 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.713390 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.725359 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.726800 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.726949 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727152 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727276 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727485 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727688 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727690 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727975 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.727985 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.728298 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.728539 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.728561 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.728680 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.728782 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-b96zd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729611 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729757 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4xx\" (UniqueName: \"kubernetes.io/projected/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-kube-api-access-bt4xx\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729793 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" 
Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46vtk\" (UniqueName: \"kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729831 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729912 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729958 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit-dir\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.729983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730008 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-node-pullsecrets\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730031 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-encryption-config\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730056 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730079 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-serving-cert\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-serving-cert\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730170 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-image-import-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730202 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730229 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-serving-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730255 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-dir\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730282 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2czb\" (UniqueName: \"kubernetes.io/projected/0f81bbb9-980b-47b2-af98-ba0fde0896ef-kube-api-access-v2czb\") pod 
\"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730309 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-config\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730330 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-client\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-images\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730467 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730490 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-policies\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-client\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730567 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-encryption-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730588 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjqv\" (UniqueName: \"kubernetes.io/projected/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-kube-api-access-swjqv\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730614 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730645 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730724 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.730746 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.731041 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.731094 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.731165 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.731220 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.731323 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.732225 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.732294 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.732233 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.732813 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.733126 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.733762 4720 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.734161 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.734183 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p5x48"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.734355 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.734667 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.735158 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bf8sq"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.735490 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.735929 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.736290 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.736514 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.736773 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.737605 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.737837 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758303 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758396 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758567 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758713 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758831 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758969 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759064 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759176 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759284 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759429 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759474 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759435 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759659 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759771 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759857 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758838 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759925 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.759979 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.760643 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.760761 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.760824 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.761398 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.758194 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.761563 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.761873 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.778754 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.762646 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.780254 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.792629 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.797161 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.797701 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.797871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.797916 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.798038 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.798620 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.798647 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.798736 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.786979 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.798946 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799095 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799173 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799355 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799491 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799660 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799496 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.799759 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9"] Feb 02 08:58:26 crc 
kubenswrapper[4720]: I0202 08:58:26.800420 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801110 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801311 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801316 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db26k"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801573 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801579 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.801819 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.802264 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.803064 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.803238 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.803663 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.803684 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.805399 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wppsw"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.805991 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.806323 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.806870 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.807148 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.807656 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.813690 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.814574 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.815315 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.815564 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kbf24"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.815312 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.816024 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.817095 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.817131 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.817751 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.818467 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.818907 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.820626 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.821083 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.821698 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d77qs"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.822492 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.823165 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5kntx"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.824424 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.825034 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5r75z"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.826172 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.829989 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.830652 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.830929 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831321 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da394811-8516-40db-b222-195e8e0c3e98-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831354 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831379 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30605a2-7f73-4f06-8e41-6430e6402b7c-trusted-ca\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831410 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit-dir\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da394811-8516-40db-b222-195e8e0c3e98-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831470 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831514 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hpv\" (UniqueName: \"kubernetes.io/projected/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-kube-api-access-s9hpv\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831569 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831587 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle\") pod 
\"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-node-pullsecrets\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831622 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-encryption-config\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831642 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87np\" (UniqueName: \"kubernetes.io/projected/18774b0b-cedf-47b3-9113-5531e4c256f0-kube-api-access-v87np\") pod \"downloads-7954f5f757-b96zd\" (UID: \"18774b0b-cedf-47b3-9113-5531e4c256f0\") " pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831680 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831717 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-serving-cert\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831755 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-proxy-tls\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831774 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-serving-cert\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831795 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-image-import-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4v9z\" (UniqueName: \"kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831841 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831862 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbtc\" (UniqueName: \"kubernetes.io/projected/a7104733-5864-4e3d-855b-1e28181bb201-kube-api-access-thbtc\") pod \"migrator-59844c95c7-wn5z2\" (UID: \"a7104733-5864-4e3d-855b-1e28181bb201\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831893 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831913 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-serving-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-dir\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc 
kubenswrapper[4720]: I0202 08:58:26.831946 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a45af89-b379-407a-a34b-54ede9957c2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jq6x\" (UniqueName: \"kubernetes.io/projected/5a45af89-b379-407a-a34b-54ede9957c2d-kube-api-access-2jq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831981 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2czb\" (UniqueName: \"kubernetes.io/projected/0f81bbb9-980b-47b2-af98-ba0fde0896ef-kube-api-access-v2czb\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.831998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-config\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-client\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832030 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832048 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832065 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-images\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832080 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832114 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832132 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832149 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832185 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-policies\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-client\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832234 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832261 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-encryption-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832295 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hkl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832307 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjqv\" (UniqueName: \"kubernetes.io/projected/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-kube-api-access-swjqv\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832349 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4xx\" (UniqueName: \"kubernetes.io/projected/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-kube-api-access-bt4xx\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832392 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832413 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46vtk\" (UniqueName: \"kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832432 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832451 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da394811-8516-40db-b222-195e8e0c3e98-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832467 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kb7\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-kube-api-access-w7kb7\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhd7\" (UniqueName: \"kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6543f0bd-97f3-42f6-94c6-73241331b6ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832527 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832545 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832561 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832575 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f30605a2-7f73-4f06-8e41-6430e6402b7c-metrics-tls\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtpq\" (UniqueName: \"kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.832627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a45af89-b379-407a-a34b-54ede9957c2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.833751 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.834386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.835592 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-images\") pod 
\"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.837605 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gs5\" (UniqueName: \"kubernetes.io/projected/6543f0bd-97f3-42f6-94c6-73241331b6ca-kube-api-access-d8gs5\") pod \"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.838680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.838900 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.842023 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-policies\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.842112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.842591 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.843097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-image-import-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.843442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.843500 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit-dir\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.846840 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-audit\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.847152 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f81bbb9-980b-47b2-af98-ba0fde0896ef-audit-dir\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.847387 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.847432 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-encryption-config\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.847846 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.848459 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-client\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.848604 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.848616 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-node-pullsecrets\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.849240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-client\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.849650 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-etcd-serving-ca\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.849955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f81bbb9-980b-47b2-af98-ba0fde0896ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.850943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-serving-cert\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.851001 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.851792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-serving-cert\") pod 
\"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.853253 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.856826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f81bbb9-980b-47b2-af98-ba0fde0896ef-encryption-config\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.844509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-config\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.867560 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.866796 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmflj"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.868117 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.869703 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.870287 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.871435 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p5x48"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.873734 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.878159 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.879402 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b96zd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.881515 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-65c4r"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.883538 4720 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.883697 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.883710 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.896813 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.900858 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.900924 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.900940 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.900954 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.900967 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.901000 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.901017 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.901034 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.901048 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.911569 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bf8sq"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.913632 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.915208 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.916733 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.921711 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wppsw"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.922584 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 08:58:26 crc 
kubenswrapper[4720]: I0202 08:58:26.924474 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.926193 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.928810 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xz2ts"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.929644 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db26k"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.931181 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.932907 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5r75z"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.934560 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.935863 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.936322 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qrxgd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.937285 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.937529 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fd4g7"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.938407 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.938516 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hkl"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.939658 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d77qs"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.941028 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.942108 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrxgd"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.943457 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-65c4r"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.948046 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xjd27"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.949408 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xjd27"] Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.949529 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950353 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kb7\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-kube-api-access-w7kb7\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950507 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxhd7\" (UniqueName: \"kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7\") pod 
\"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da394811-8516-40db-b222-195e8e0c3e98-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6543f0bd-97f3-42f6-94c6-73241331b6ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950625 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950715 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950748 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f30605a2-7f73-4f06-8e41-6430e6402b7c-metrics-tls\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtpq\" (UniqueName: \"kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" 
Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a45af89-b379-407a-a34b-54ede9957c2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950835 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gs5\" (UniqueName: \"kubernetes.io/projected/6543f0bd-97f3-42f6-94c6-73241331b6ca-kube-api-access-d8gs5\") pod \"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950916 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da394811-8516-40db-b222-195e8e0c3e98-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30605a2-7f73-4f06-8e41-6430e6402b7c-trusted-ca\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.950975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951011 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951037 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951077 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da394811-8516-40db-b222-195e8e0c3e98-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hpv\" (UniqueName: \"kubernetes.io/projected/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-kube-api-access-s9hpv\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951158 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951182 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951210 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87np\" (UniqueName: \"kubernetes.io/projected/18774b0b-cedf-47b3-9113-5531e4c256f0-kube-api-access-v87np\") pod \"downloads-7954f5f757-b96zd\" (UID: \"18774b0b-cedf-47b3-9113-5531e4c256f0\") " pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-proxy-tls\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4v9z\" (UniqueName: 
\"kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951313 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thbtc\" (UniqueName: \"kubernetes.io/projected/a7104733-5864-4e3d-855b-1e28181bb201-kube-api-access-thbtc\") pod \"migrator-59844c95c7-wn5z2\" (UID: \"a7104733-5864-4e3d-855b-1e28181bb201\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951351 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a45af89-b379-407a-a34b-54ede9957c2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jq6x\" (UniqueName: \"kubernetes.io/projected/5a45af89-b379-407a-a34b-54ede9957c2d-kube-api-access-2jq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951458 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951481 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951504 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951528 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951572 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.951734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.953301 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.954409 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.954955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc 
kubenswrapper[4720]: I0202 08:58:26.956621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.956650 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.956691 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a45af89-b379-407a-a34b-54ede9957c2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.957022 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.957257 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.957478 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.957688 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.957944 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da394811-8516-40db-b222-195e8e0c3e98-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.958011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.958059 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.958709 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.958755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.959006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.959502 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da394811-8516-40db-b222-195e8e0c3e98-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.959593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a45af89-b379-407a-a34b-54ede9957c2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.960403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.961343 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.962181 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.962262 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.963118 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.963589 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.964081 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.964385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.971503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.975406 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 08:58:26 crc kubenswrapper[4720]: I0202 08:58:26.995616 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.007631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6543f0bd-97f3-42f6-94c6-73241331b6ca-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.016034 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.036253 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.055613 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.076040 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.090780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f30605a2-7f73-4f06-8e41-6430e6402b7c-metrics-tls\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.100303 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.109259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f30605a2-7f73-4f06-8e41-6430e6402b7c-trusted-ca\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.115410 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.136232 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.156783 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.169511 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-proxy-tls\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.196036 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.216141 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.235613 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 08:58:27 crc 
kubenswrapper[4720]: I0202 08:58:27.255177 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.277239 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.296322 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.316453 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.337805 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.357028 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.377158 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.396233 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.417090 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.435746 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.455100 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.475562 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.495733 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.515374 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.536806 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.557335 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.575566 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.596609 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 08:58:27 crc 
kubenswrapper[4720]: I0202 08:58:27.615546 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.636419 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.655724 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.675787 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.697200 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.716293 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.737392 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.756454 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.775653 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.796194 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.817456 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.834107 4720 request.go:700] Waited for 1.017803461s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.836220 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.856299 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.876696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.897057 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.915676 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.936957 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.956766 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.975319 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 08:58:27 crc kubenswrapper[4720]: I0202 08:58:27.995463 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.015534 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.035447 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.054634 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.077693 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.096658 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.116764 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.135617 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.156478 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.176960 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.197207 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.245002 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4xx\" (UniqueName: \"kubernetes.io/projected/a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce-kube-api-access-bt4xx\") pod \"apiserver-76f77b778f-5kntx\" (UID: \"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce\") " pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.247988 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.265566 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46vtk\" (UniqueName: \"kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk\") pod \"controller-manager-879f6c89f-6rzkl\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.276572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2czb\" (UniqueName: \"kubernetes.io/projected/0f81bbb9-980b-47b2-af98-ba0fde0896ef-kube-api-access-v2czb\") pod \"apiserver-7bbb656c7d-6l4dl\" (UID: \"0f81bbb9-980b-47b2-af98-ba0fde0896ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.276647 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.298247 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.315583 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.337043 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.357688 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.376859 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.412202 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swjqv\" (UniqueName: \"kubernetes.io/projected/4b5090d5-9ae8-4af6-a6b7-a4e29b671585-kube-api-access-swjqv\") pod \"machine-api-operator-5694c8668f-xz2ts\" (UID: \"4b5090d5-9ae8-4af6-a6b7-a4e29b671585\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.415417 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.435633 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.476104 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.479456 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.484811 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5kntx"] Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.499972 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.515429 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.527943 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.536717 4720 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.556476 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.557318 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.577207 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.596605 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.616370 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.636383 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.657446 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.676868 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.698521 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.701335 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xz2ts"] Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.717176 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.735058 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.750272 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.756288 4720 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.774602 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.787660 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"] Feb 02 08:58:28 crc kubenswrapper[4720]: W0202 08:58:28.800992 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f81bbb9_980b_47b2_af98_ba0fde0896ef.slice/crio-95ab8a8d1fed3599fa10337d752418a9e580cc1a0f08d019da3a548ddef2f443 WatchSource:0}: Error finding container 95ab8a8d1fed3599fa10337d752418a9e580cc1a0f08d019da3a548ddef2f443: Status 404 returned error can't find the container with id 95ab8a8d1fed3599fa10337d752418a9e580cc1a0f08d019da3a548ddef2f443 Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.822842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhd7\" (UniqueName: \"kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7\") pod \"route-controller-manager-6576b87f9c-jrbg2\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.834112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thbtc\" (UniqueName: \"kubernetes.io/projected/a7104733-5864-4e3d-855b-1e28181bb201-kube-api-access-thbtc\") pod \"migrator-59844c95c7-wn5z2\" (UID: \"a7104733-5864-4e3d-855b-1e28181bb201\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.834210 4720 request.go:700] Waited for 1.879854982s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.855612 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87np\" (UniqueName: \"kubernetes.io/projected/18774b0b-cedf-47b3-9113-5531e4c256f0-kube-api-access-v87np\") pod \"downloads-7954f5f757-b96zd\" (UID: \"18774b0b-cedf-47b3-9113-5531e4c256f0\") " pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.877374 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da394811-8516-40db-b222-195e8e0c3e98-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-plbzk\" (UID: \"da394811-8516-40db-b222-195e8e0c3e98\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.883784 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.894673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4v9z\" (UniqueName: \"kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z\") pod \"oauth-openshift-558db77b4-lcvpd\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.905154 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.919670 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.921219 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hpv\" (UniqueName: \"kubernetes.io/projected/a54ee1bd-0add-4c97-8a2e-af3963dadaf3-kube-api-access-s9hpv\") pod \"machine-config-controller-84d6567774-dctsk\" (UID: \"a54ee1bd-0add-4c97-8a2e-af3963dadaf3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.935947 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kb7\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-kube-api-access-w7kb7\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.961899 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jq6x\" (UniqueName: \"kubernetes.io/projected/5a45af89-b379-407a-a34b-54ede9957c2d-kube-api-access-2jq6x\") pod \"openshift-apiserver-operator-796bbdcf4f-kmm7k\" (UID: \"5a45af89-b379-407a-a34b-54ede9957c2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.977014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtpq\" (UniqueName: \"kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq\") pod \"console-f9d7485db-9pczc\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:28 crc kubenswrapper[4720]: I0202 08:58:28.997851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gs5\" (UniqueName: \"kubernetes.io/projected/6543f0bd-97f3-42f6-94c6-73241331b6ca-kube-api-access-d8gs5\") pod \"cluster-samples-operator-665b6dd947-g98rt\" (UID: \"6543f0bd-97f3-42f6-94c6-73241331b6ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.000307 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.010642 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.030152 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.031429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f30605a2-7f73-4f06-8e41-6430e6402b7c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ssn4h\" (UID: \"f30605a2-7f73-4f06-8e41-6430e6402b7c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.053565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" event={"ID":"c369d6de-8ee1-4aac-bf97-96d334c023e6","Type":"ContainerStarted","Data":"eb4c3ca81a4ad245e50ca647e694cf3484e2a5f335522d9d5081622e756f6792"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.054760 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.055072 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" event={"ID":"c369d6de-8ee1-4aac-bf97-96d334c023e6","Type":"ContainerStarted","Data":"225656e48f7ecd3289ff7f79a8252b487d51f3c2bf73b45f5090bc2ce4c7ed4b"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.055477 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.057295 4720 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6rzkl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.057347 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.069446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" event={"ID":"4b5090d5-9ae8-4af6-a6b7-a4e29b671585","Type":"ContainerStarted","Data":"5289477ab7aede0c8371988b0673e8de63aa872414b0f6dfafc38de5ee8944ff"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.069736 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" event={"ID":"4b5090d5-9ae8-4af6-a6b7-a4e29b671585","Type":"ContainerStarted","Data":"cec2f21237ae39d2ac304e0e8eb17e97220649dab626c251b0154a2e9a93d2b8"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.073510 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" event={"ID":"0f81bbb9-980b-47b2-af98-ba0fde0896ef","Type":"ContainerStarted","Data":"95ab8a8d1fed3599fa10337d752418a9e580cc1a0f08d019da3a548ddef2f443"} Feb 02 08:58:29 crc 
kubenswrapper[4720]: I0202 08:58:29.075645 4720 generic.go:334] "Generic (PLEG): container finished" podID="a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce" containerID="0cce52a932c810b42fa464b1da37c3c0cc623bb03cb93a23823a3f9a632d1445" exitCode=0 Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.075765 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" event={"ID":"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce","Type":"ContainerDied","Data":"0cce52a932c810b42fa464b1da37c3c0cc623bb03cb93a23823a3f9a632d1445"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.076337 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" event={"ID":"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce","Type":"ContainerStarted","Data":"b8be46c6d6a622e56fd65d07129225da0fe01b9519e14bf9eb21ad91d7881184"} Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.085983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf9lt\" (UniqueName: \"kubernetes.io/projected/abfabda3-e980-4d64-af7e-2c3f55142af6-kube-api-access-xf9lt\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086338 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086435 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kx4\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-config\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086636 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 
08:58:29.086818 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.086933 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlw97\" (UniqueName: \"kubernetes.io/projected/62c76947-3536-4d11-b06e-fa9fbdc2d55a-kube-api-access-tlw97\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087026 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087097 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-auth-proxy-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087193 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087279 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087379 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62c76947-3536-4d11-b06e-fa9fbdc2d55a-serving-cert\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087578 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bae959a-c36d-4986-80e0-dad2f0861334-serving-cert\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087676 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/032de0ec-0597-4308-bafc-071b70bbc9cd-machine-approver-tls\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.087928 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdth\" (UniqueName: \"kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088047 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-config\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088296 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f370f1-d216-4b7b-86e6-f18ef12e9843-config\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088397 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-trusted-ca\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " 
pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088504 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfabda3-e980-4d64-af7e-2c3f55142af6-serving-cert\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088619 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f370f1-d216-4b7b-86e6-f18ef12e9843-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.088679 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:29.588655666 +0000 UTC m=+143.444281222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.088849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.089040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f370f1-d216-4b7b-86e6-f18ef12e9843-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.089248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrk5\" (UniqueName: \"kubernetes.io/projected/032de0ec-0597-4308-bafc-071b70bbc9cd-kube-api-access-lwrk5\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.089342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.089626 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8sl\" (UniqueName: \"kubernetes.io/projected/1bae959a-c36d-4986-80e0-dad2f0861334-kube-api-access-rn8sl\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.089846 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62c76947-3536-4d11-b06e-fa9fbdc2d55a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.122284 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.122874 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.129378 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191512 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jlw\" (UniqueName: \"kubernetes.io/projected/c1b41d56-2810-41ac-b2d6-b81b14389e44-kube-api-access-65jlw\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frtrl\" (UniqueName: \"kubernetes.io/projected/fa3da67d-1641-4c84-9d0e-51788244f887-kube-api-access-frtrl\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191730 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-stats-auth\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/80b52220-efb0-4101-bb96-68169372693b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191786 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f370f1-d216-4b7b-86e6-f18ef12e9843-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191804 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxj4\" (UniqueName: \"kubernetes.io/projected/82d280cc-a79e-42ef-a923-35a2faa20a90-kube-api-access-8cxj4\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191829 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f370f1-d216-4b7b-86e6-f18ef12e9843-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191865 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191908 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc2c97f-7857-4792-a92b-88c1415b652f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191940 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42mlr\" (UniqueName: \"kubernetes.io/projected/80b52220-efb0-4101-bb96-68169372693b-kube-api-access-42mlr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191956 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5tn\" (UniqueName: \"kubernetes.io/projected/7ed9a100-019b-4f35-ab4c-187b087a3e99-kube-api-access-2j5tn\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.191998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5164c8ec-dcff-4756-9459-d9c1f01a1e85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192014 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-metrics-certs\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192031 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95mb\" (UniqueName: \"kubernetes.io/projected/a8765666-2f9b-4523-bbcd-5087b9ae8416-kube-api-access-z95mb\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192054 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbj4\" (UniqueName: \"kubernetes.io/projected/38b584e2-923b-49f3-9681-a030512550d8-kube-api-access-9hbj4\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-config\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b41d56-2810-41ac-b2d6-b81b14389e44-service-ca-bundle\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192297 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b52220-efb0-4101-bb96-68169372693b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf9lt\" (UniqueName: \"kubernetes.io/projected/abfabda3-e980-4d64-af7e-2c3f55142af6-kube-api-access-xf9lt\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192365 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnv8s\" (UniqueName: \"kubernetes.io/projected/1b688028-fc1d-4943-99e8-101d0ab88506-kube-api-access-nnv8s\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192381 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc2c97f-7857-4792-a92b-88c1415b652f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ed9a100-019b-4f35-ab4c-187b087a3e99-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192433 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192467 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlw97\" (UniqueName: \"kubernetes.io/projected/62c76947-3536-4d11-b06e-fa9fbdc2d55a-kube-api-access-tlw97\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: 
\"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhs2\" (UniqueName: \"kubernetes.io/projected/b316d3e2-e731-49e3-8bad-43c1e276dd43-kube-api-access-gzhs2\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvvg\" (UniqueName: \"kubernetes.io/projected/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-kube-api-access-tmvvg\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62c76947-3536-4d11-b06e-fa9fbdc2d55a-serving-cert\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192569 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192586 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qbq\" (UniqueName: \"kubernetes.io/projected/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-kube-api-access-j4qbq\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192601 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-node-bootstrap-token\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192616 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192633 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/032de0ec-0597-4308-bafc-071b70bbc9cd-machine-approver-tls\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192700 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-apiservice-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192715 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vzc\" (UniqueName: \"kubernetes.io/projected/635f5723-3a45-43b1-9745-2261943f0de1-kube-api-access-z4vzc\") pod \"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192733 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-certs\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192767 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b584e2-923b-49f3-9681-a030512550d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 
02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-default-certificate\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192803 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-socket-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192822 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdth\" (UniqueName: \"kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192837 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97phb\" (UniqueName: \"kubernetes.io/projected/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-kube-api-access-97phb\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192853 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.192870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b584e2-923b-49f3-9681-a030512550d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193005 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193067 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-trusted-ca\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " 
pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193087 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-registration-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193104 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3465db32-883c-4dbd-8921-386c5f9de67a-signing-cabundle\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193122 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhj2v\" (UniqueName: \"kubernetes.io/projected/b77d8e1c-b943-452f-b1b0-21ac07685158-kube-api-access-fhj2v\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193139 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfabda3-e980-4d64-af7e-2c3f55142af6-serving-cert\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-csi-data-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193183 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82d280cc-a79e-42ef-a923-35a2faa20a90-tmpfs\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193199 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rkj\" (UniqueName: \"kubernetes.io/projected/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-kube-api-access-t5rkj\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193243 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qz5\" (UniqueName: \"kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc 
kubenswrapper[4720]: I0202 08:58:29.193261 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tvg\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-kube-api-access-w6tvg\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193294 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa3da67d-1641-4c84-9d0e-51788244f887-config-volume\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193332 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrk5\" (UniqueName: \"kubernetes.io/projected/032de0ec-0597-4308-bafc-071b70bbc9cd-kube-api-access-lwrk5\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193354 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193373 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-serving-cert\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193393 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b6705e-b780-4630-8d02-68d663d146cd-serving-cert\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193446 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn8sl\" (UniqueName: \"kubernetes.io/projected/1bae959a-c36d-4986-80e0-dad2f0861334-kube-api-access-rn8sl\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193461 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62c76947-3536-4d11-b06e-fa9fbdc2d55a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193506 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-plugins-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193562 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-metrics-tls\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdq2\" (UniqueName: \"kubernetes.io/projected/b5843997-ba37-4e10-ae40-67c335d91321-kube-api-access-rtdq2\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193655 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193672 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kx4\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193689 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-config\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193718 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193755 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-auth-proxy-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-client\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa3da67d-1641-4c84-9d0e-51788244f887-metrics-tls\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193838 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193908 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bae959a-c36d-4986-80e0-dad2f0861334-serving-cert\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193939 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-proxy-tls\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b77d8e1c-b943-452f-b1b0-21ac07685158-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.193972 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4q9b\" (UniqueName: \"kubernetes.io/projected/47b6705e-b780-4630-8d02-68d663d146cd-kube-api-access-v4q9b\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194013 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194028 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-srv-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194045 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-images\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxzv\" (UniqueName: \"kubernetes.io/projected/3465db32-883c-4dbd-8921-386c5f9de67a-kube-api-access-fhxzv\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194107 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-config\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194132 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3465db32-883c-4dbd-8921-386c5f9de67a-signing-key\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194149 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6705e-b780-4630-8d02-68d663d146cd-config\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f370f1-d216-4b7b-86e6-f18ef12e9843-config\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5164c8ec-dcff-4756-9459-d9c1f01a1e85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194241 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2c97f-7857-4792-a92b-88c1415b652f-config\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194267 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194281 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/635f5723-3a45-43b1-9745-2261943f0de1-cert\") pod \"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194297 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-srv-cert\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194397 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-webhook-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.194421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-mountpoint-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.194546 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:29.694524942 +0000 UTC m=+143.550150498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.202851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-auth-proxy-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.202908 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032de0ec-0597-4308-bafc-071b70bbc9cd-config\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.203855 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.210149 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f370f1-d216-4b7b-86e6-f18ef12e9843-config\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.215082 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-trusted-ca\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.215281 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-config\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.216640 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bae959a-c36d-4986-80e0-dad2f0861334-service-ca-bundle\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.217120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/62c76947-3536-4d11-b06e-fa9fbdc2d55a-serving-cert\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.217750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/62c76947-3536-4d11-b06e-fa9fbdc2d55a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.218079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.220219 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.220917 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfabda3-e980-4d64-af7e-2c3f55142af6-config\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.220995 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/032de0ec-0597-4308-bafc-071b70bbc9cd-machine-approver-tls\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.223544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.225489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.225504 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f370f1-d216-4b7b-86e6-f18ef12e9843-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.227445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfabda3-e980-4d64-af7e-2c3f55142af6-serving-cert\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.232304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.237728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.240403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlw97\" (UniqueName: \"kubernetes.io/projected/62c76947-3536-4d11-b06e-fa9fbdc2d55a-kube-api-access-tlw97\") pod \"openshift-config-operator-7777fb866f-p5x48\" (UID: \"62c76947-3536-4d11-b06e-fa9fbdc2d55a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.241465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bae959a-c36d-4986-80e0-dad2f0861334-serving-cert\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.241707 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.262613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kx4\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.274090 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrk5\" (UniqueName: \"kubernetes.io/projected/032de0ec-0597-4308-bafc-071b70bbc9cd-kube-api-access-lwrk5\") pod \"machine-approver-56656f9798-d9xbv\" (UID: \"032de0ec-0597-4308-bafc-071b70bbc9cd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.294318 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rn8sl\" (UniqueName: \"kubernetes.io/projected/1bae959a-c36d-4986-80e0-dad2f0861334-kube-api-access-rn8sl\") pod \"authentication-operator-69f744f599-bmflj\" (UID: \"1bae959a-c36d-4986-80e0-dad2f0861334\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296083 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-srv-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296140 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-images\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296204 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhxzv\" (UniqueName: \"kubernetes.io/projected/3465db32-883c-4dbd-8921-386c5f9de67a-kube-api-access-fhxzv\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296232 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296255 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3465db32-883c-4dbd-8921-386c5f9de67a-signing-key\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6705e-b780-4630-8d02-68d663d146cd-config\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5164c8ec-dcff-4756-9459-d9c1f01a1e85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296309 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2c97f-7857-4792-a92b-88c1415b652f-config\") pod 
\"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296327 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296356 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/635f5723-3a45-43b1-9745-2261943f0de1-cert\") pod \"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-srv-cert\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-webhook-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296446 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-mountpoint-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.296479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jlw\" (UniqueName: \"kubernetes.io/projected/c1b41d56-2810-41ac-b2d6-b81b14389e44-kube-api-access-65jlw\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frtrl\" (UniqueName: \"kubernetes.io/projected/fa3da67d-1641-4c84-9d0e-51788244f887-kube-api-access-frtrl\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298307 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-stats-auth\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/80b52220-efb0-4101-bb96-68169372693b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298374 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cxj4\" (UniqueName: \"kubernetes.io/projected/82d280cc-a79e-42ef-a923-35a2faa20a90-kube-api-access-8cxj4\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298411 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc2c97f-7857-4792-a92b-88c1415b652f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42mlr\" (UniqueName: \"kubernetes.io/projected/80b52220-efb0-4101-bb96-68169372693b-kube-api-access-42mlr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j5tn\" (UniqueName: \"kubernetes.io/projected/7ed9a100-019b-4f35-ab4c-187b087a3e99-kube-api-access-2j5tn\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-metrics-certs\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298505 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95mb\" (UniqueName: \"kubernetes.io/projected/a8765666-2f9b-4523-bbcd-5087b9ae8416-kube-api-access-z95mb\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5164c8ec-dcff-4756-9459-d9c1f01a1e85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298555 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b41d56-2810-41ac-b2d6-b81b14389e44-service-ca-bundle\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b52220-efb0-4101-bb96-68169372693b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298597 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbj4\" (UniqueName: \"kubernetes.io/projected/38b584e2-923b-49f3-9681-a030512550d8-kube-api-access-9hbj4\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-config\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298642 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnv8s\" (UniqueName: \"kubernetes.io/projected/1b688028-fc1d-4943-99e8-101d0ab88506-kube-api-access-nnv8s\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298657 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc2c97f-7857-4792-a92b-88c1415b652f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298677 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ed9a100-019b-4f35-ab4c-187b087a3e99-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298744 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gzhs2\" (UniqueName: \"kubernetes.io/projected/b316d3e2-e731-49e3-8bad-43c1e276dd43-kube-api-access-gzhs2\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298767 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvvg\" (UniqueName: \"kubernetes.io/projected/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-kube-api-access-tmvvg\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qbq\" (UniqueName: \"kubernetes.io/projected/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-kube-api-access-j4qbq\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298820 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-node-bootstrap-token\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298862 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298904 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-apiservice-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298928 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vzc\" (UniqueName: \"kubernetes.io/projected/635f5723-3a45-43b1-9745-2261943f0de1-kube-api-access-z4vzc\") pod 
\"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-certs\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b584e2-923b-49f3-9681-a030512550d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.298998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-default-certificate\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-socket-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299041 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97phb\" (UniqueName: \"kubernetes.io/projected/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-kube-api-access-97phb\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299060 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b584e2-923b-49f3-9681-a030512550d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299095 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-registration-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3465db32-883c-4dbd-8921-386c5f9de67a-signing-cabundle\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhj2v\" (UniqueName: \"kubernetes.io/projected/b77d8e1c-b943-452f-b1b0-21ac07685158-kube-api-access-fhj2v\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-csi-data-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299191 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82d280cc-a79e-42ef-a923-35a2faa20a90-tmpfs\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299208 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5rkj\" (UniqueName: \"kubernetes.io/projected/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-kube-api-access-t5rkj\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qz5\" (UniqueName: \"kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299243 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tvg\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-kube-api-access-w6tvg\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299242 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5164c8ec-dcff-4756-9459-d9c1f01a1e85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299265 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa3da67d-1641-4c84-9d0e-51788244f887-config-volume\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299287 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-serving-cert\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299304 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b6705e-b780-4630-8d02-68d663d146cd-serving-cert\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299345 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-plugins-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-metrics-tls\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdq2\" (UniqueName: \"kubernetes.io/projected/b5843997-ba37-4e10-ae40-67c335d91321-kube-api-access-rtdq2\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299414 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-client\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa3da67d-1641-4c84-9d0e-51788244f887-metrics-tls\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299448 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b77d8e1c-b943-452f-b1b0-21ac07685158-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299489 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4q9b\" (UniqueName: \"kubernetes.io/projected/47b6705e-b780-4630-8d02-68d663d146cd-kube-api-access-v4q9b\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299506 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-proxy-tls\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.299525 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-profile-collector-cert\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.299570 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:29.799554745 +0000 UTC m=+143.655180301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.300690 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fc2c97f-7857-4792-a92b-88c1415b652f-config\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.300943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.301011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-mountpoint-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.301288 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b6705e-b780-4630-8d02-68d663d146cd-config\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.303311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-config\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.303791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b584e2-923b-49f3-9681-a030512550d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.303959 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-webhook-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.306989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-images\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.307269 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.307697 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ed9a100-019b-4f35-ab4c-187b087a3e99-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.307784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b52220-efb0-4101-bb96-68169372693b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.310343 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.311681 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-srv-cert\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.311717 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b52220-efb0-4101-bb96-68169372693b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-node-bootstrap-token\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312237 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b41d56-2810-41ac-b2d6-b81b14389e44-service-ca-bundle\") pod \"router-default-5444994796-kbf24\" (UID: 
\"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-plugins-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312456 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa3da67d-1641-4c84-9d0e-51788244f887-config-volume\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312532 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-registration-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312604 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-socket-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.312711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-metrics-certs\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.313085 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf9lt\" (UniqueName: \"kubernetes.io/projected/abfabda3-e980-4d64-af7e-2c3f55142af6-kube-api-access-xf9lt\") pod \"console-operator-58897d9998-bf8sq\" (UID: \"abfabda3-e980-4d64-af7e-2c3f55142af6\") " pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.314136 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5843997-ba37-4e10-ae40-67c335d91321-csi-data-dir\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.315314 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3465db32-883c-4dbd-8921-386c5f9de67a-signing-cabundle\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.315367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-proxy-tls\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.316834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-serving-cert\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.317647 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-metrics-tls\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.318136 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5164c8ec-dcff-4756-9459-d9c1f01a1e85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.319237 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/635f5723-3a45-43b1-9745-2261943f0de1-cert\") pod \"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.324148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-profile-collector-cert\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.324262 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b77d8e1c-b943-452f-b1b0-21ac07685158-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.324618 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-stats-auth\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.328875 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8765666-2f9b-4523-bbcd-5087b9ae8416-certs\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.329453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/82d280cc-a79e-42ef-a923-35a2faa20a90-tmpfs\") pod \"packageserver-d55dfcdfc-rn4r6\" 
(UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.330775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.333184 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.334013 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b6705e-b780-4630-8d02-68d663d146cd-serving-cert\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.344173 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.348726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82d280cc-a79e-42ef-a923-35a2faa20a90-apiservice-cert\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349049 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b584e2-923b-49f3-9681-a030512550d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349108 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b316d3e2-e731-49e3-8bad-43c1e276dd43-etcd-client\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fc2c97f-7857-4792-a92b-88c1415b652f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349469 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-srv-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: 
\"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349549 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3465db32-883c-4dbd-8921-386c5f9de67a-signing-key\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.349897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1b688028-fc1d-4943-99e8-101d0ab88506-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.350224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa3da67d-1641-4c84-9d0e-51788244f887-metrics-tls\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.352494 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.353065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f370f1-d216-4b7b-86e6-f18ef12e9843-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hgccg\" (UID: \"f4f370f1-d216-4b7b-86e6-f18ef12e9843\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.353164 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdth\" (UniqueName: \"kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth\") pod \"marketplace-operator-79b997595-qfsxp\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.354176 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c1b41d56-2810-41ac-b2d6-b81b14389e44-default-certificate\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.375277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.392435 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.400425 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.403035 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.403351 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:29.903302144 +0000 UTC m=+143.758927700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.409131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.409775 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:29.909757947 +0000 UTC m=+143.765383503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.416321 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhxzv\" (UniqueName: \"kubernetes.io/projected/3465db32-883c-4dbd-8921-386c5f9de67a-kube-api-access-fhxzv\") pod \"service-ca-9c57cc56f-wppsw\" (UID: \"3465db32-883c-4dbd-8921-386c5f9de67a\") " pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.440912 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jlw\" (UniqueName: \"kubernetes.io/projected/c1b41d56-2810-41ac-b2d6-b81b14389e44-kube-api-access-65jlw\") pod \"router-default-5444994796-kbf24\" (UID: \"c1b41d56-2810-41ac-b2d6-b81b14389e44\") " pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.449454 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.452058 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.454298 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.456851 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.467701 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frtrl\" (UniqueName: \"kubernetes.io/projected/fa3da67d-1641-4c84-9d0e-51788244f887-kube-api-access-frtrl\") pod \"dns-default-qrxgd\" (UID: \"fa3da67d-1641-4c84-9d0e-51788244f887\") " pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.477613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvvg\" (UniqueName: \"kubernetes.io/projected/d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b-kube-api-access-tmvvg\") pod \"catalog-operator-68c6474976-dwl6l\" (UID: \"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.486948 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.490405 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b96zd"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.498901 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.506545 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.501662 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.507755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbj4\" (UniqueName: \"kubernetes.io/projected/38b584e2-923b-49f3-9681-a030512550d8-kube-api-access-9hbj4\") pod \"kube-storage-version-migrator-operator-b67b599dd-dqfpz\" (UID: \"38b584e2-923b-49f3-9681-a030512550d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.514269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.514862 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.014846952 +0000 UTC m=+143.870472498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.532030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnv8s\" (UniqueName: \"kubernetes.io/projected/1b688028-fc1d-4943-99e8-101d0ab88506-kube-api-access-nnv8s\") pod \"olm-operator-6b444d44fb-m5lr9\" (UID: \"1b688028-fc1d-4943-99e8-101d0ab88506\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.543840 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.552427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhs2\" (UniqueName: \"kubernetes.io/projected/b316d3e2-e731-49e3-8bad-43c1e276dd43-kube-api-access-gzhs2\") pod \"etcd-operator-b45778765-j7hkl\" (UID: \"b316d3e2-e731-49e3-8bad-43c1e276dd43\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.552547 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6tvg\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-kube-api-access-w6tvg\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.561864 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.571824 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qbq\" (UniqueName: \"kubernetes.io/projected/1de05e78-bbe5-4a95-85f7-323e2e1c76f3-kube-api-access-j4qbq\") pod \"dns-operator-744455d44c-5r75z\" (UID: \"1de05e78-bbe5-4a95-85f7-323e2e1c76f3\") " pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.579094 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.583790 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.591688 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.607190 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc2c97f-7857-4792-a92b-88c1415b652f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8ztlm\" (UID: \"7fc2c97f-7857-4792-a92b-88c1415b652f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.612198 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42mlr\" (UniqueName: \"kubernetes.io/projected/80b52220-efb0-4101-bb96-68169372693b-kube-api-access-42mlr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpswr\" (UID: \"80b52220-efb0-4101-bb96-68169372693b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.616957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.617449 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.11742973 +0000 UTC m=+143.973055286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.627560 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.634750 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.644303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5tn\" (UniqueName: \"kubernetes.io/projected/7ed9a100-019b-4f35-ab4c-187b087a3e99-kube-api-access-2j5tn\") pod \"control-plane-machine-set-operator-78cbb6b69f-tb5vz\" (UID: \"7ed9a100-019b-4f35-ab4c-187b087a3e99\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.673328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cxj4\" (UniqueName: \"kubernetes.io/projected/82d280cc-a79e-42ef-a923-35a2faa20a90-kube-api-access-8cxj4\") pod \"packageserver-d55dfcdfc-rn4r6\" (UID: \"82d280cc-a79e-42ef-a923-35a2faa20a90\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.685271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95mb\" (UniqueName: \"kubernetes.io/projected/a8765666-2f9b-4523-bbcd-5087b9ae8416-kube-api-access-z95mb\") pod \"machine-config-server-fd4g7\" (UID: \"a8765666-2f9b-4523-bbcd-5087b9ae8416\") " pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.713982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4q9b\" (UniqueName: \"kubernetes.io/projected/47b6705e-b780-4630-8d02-68d663d146cd-kube-api-access-v4q9b\") pod \"service-ca-operator-777779d784-d77qs\" (UID: \"47b6705e-b780-4630-8d02-68d663d146cd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.718822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.719353 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.219314319 +0000 UTC m=+144.074939875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.720791 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.721258 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.22123471 +0000 UTC m=+144.076860266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.729580 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97phb\" (UniqueName: \"kubernetes.io/projected/b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd-kube-api-access-97phb\") pod \"machine-config-operator-74547568cd-mnk8c\" (UID: \"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.731074 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p5x48"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.739391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5164c8ec-dcff-4756-9459-d9c1f01a1e85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xf5vc\" (UID: \"5164c8ec-dcff-4756-9459-d9c1f01a1e85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.739826 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.740591 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.751583 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.764497 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vzc\" (UniqueName: \"kubernetes.io/projected/635f5723-3a45-43b1-9745-2261943f0de1-kube-api-access-z4vzc\") pod \"ingress-canary-xjd27\" (UID: \"635f5723-3a45-43b1-9745-2261943f0de1\") " pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.767619 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bf8sq"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.769756 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.775414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhj2v\" (UniqueName: \"kubernetes.io/projected/b77d8e1c-b943-452f-b1b0-21ac07685158-kube-api-access-fhj2v\") pod \"multus-admission-controller-857f4d67dd-db26k\" (UID: \"b77d8e1c-b943-452f-b1b0-21ac07685158\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.783909 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.786520 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.797427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdq2\" (UniqueName: \"kubernetes.io/projected/b5843997-ba37-4e10-ae40-67c335d91321-kube-api-access-rtdq2\") pod \"csi-hostpathplugin-65c4r\" (UID: \"b5843997-ba37-4e10-ae40-67c335d91321\") " pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: W0202 08:58:29.811806 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62c76947_3536_4d11_b06e_fa9fbdc2d55a.slice/crio-c421bf7ec78805c9a5a6de58267e20c3723d3f082aae85b15faf6d6dbddc3b11 WatchSource:0}: Error finding container c421bf7ec78805c9a5a6de58267e20c3723d3f082aae85b15faf6d6dbddc3b11: Status 404 returned error can't find the container with id c421bf7ec78805c9a5a6de58267e20c3723d3f082aae85b15faf6d6dbddc3b11 Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.815770 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5rkj\" (UniqueName: \"kubernetes.io/projected/0347bb51-bfe3-44a6-be39-3c4f0eb8d91c-kube-api-access-t5rkj\") pod \"package-server-manager-789f6589d5-lhrrv\" (UID: \"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.822018 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.822328 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.822678 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.322658867 +0000 UTC m=+144.178284423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.822743 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.823391 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.323383556 +0000 UTC m=+144.179009112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.824872 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.831531 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.836562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qz5\" (UniqueName: \"kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5\") pod \"collect-profiles-29500365-jllzp\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.850246 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.864363 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.874138 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.885189 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.896410 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.917633 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" Feb 02 08:58:29 crc kubenswrapper[4720]: W0202 08:58:29.919528 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9d36a5f_dbbe_46cd_9139_548ee7a5ea0b.slice/crio-973c4f4deb2edc4dca9b5e652d38c2bd6b716df6c04d377ff54308b56d1d8d99 WatchSource:0}: Error finding container 973c4f4deb2edc4dca9b5e652d38c2bd6b716df6c04d377ff54308b56d1d8d99: Status 404 returned error can't find the container with id 973c4f4deb2edc4dca9b5e652d38c2bd6b716df6c04d377ff54308b56d1d8d99 Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.923774 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:29 crc kubenswrapper[4720]: E0202 08:58:29.924338 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.424313029 +0000 UTC m=+144.279938585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.934135 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wppsw"] Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.941715 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fd4g7" Feb 02 08:58:29 crc kubenswrapper[4720]: I0202 08:58:29.953415 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xjd27" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.025558 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.026069 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.526058335 +0000 UTC m=+144.381683891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.060539 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.079655 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.085294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" event={"ID":"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb","Type":"ContainerStarted","Data":"8c9a4bd1734624696b86b69718f7806740b730e4328c8ddec5777a49f0ba9f79"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.089712 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" event={"ID":"f30605a2-7f73-4f06-8e41-6430e6402b7c","Type":"ContainerStarted","Data":"c0285f0cb1a42e7a65c96b2808a2b5b097e055ec5b61dde42e2e619bf7eaabcd"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.108326 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" event={"ID":"4b5090d5-9ae8-4af6-a6b7-a4e29b671585","Type":"ContainerStarted","Data":"6625afba200527f3eb021747df6f87c9f051562b3e19944d9f3e4ba671551dfe"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.109738 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.114953 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" event={"ID":"5a45af89-b379-407a-a34b-54ede9957c2d","Type":"ContainerStarted","Data":"1ba8875a90f20d58c637d3a467c277c15f753e2193402869d19dc7feab560587"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.127689 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.127901 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.627860212 +0000 UTC m=+144.483485768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.128079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.128483 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.628475938 +0000 UTC m=+144.484101494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.158849 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" event={"ID":"6543f0bd-97f3-42f6-94c6-73241331b6ca","Type":"ContainerStarted","Data":"ac01182d59706b46b7f6e52331dc2cb4bc4af46366b64e6b5877d6d6099cd443"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.163573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9pczc" event={"ID":"9e7ec368-a244-4b1c-a313-987332c21d0e","Type":"ContainerStarted","Data":"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.163612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9pczc" event={"ID":"9e7ec368-a244-4b1c-a313-987332c21d0e","Type":"ContainerStarted","Data":"2108410cbc51b55fba7e66006340454bd6c71639a898b8e1ff47917a0a0c3db9"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.166555 4720 generic.go:334] "Generic (PLEG): container finished" podID="0f81bbb9-980b-47b2-af98-ba0fde0896ef" containerID="d8c568b4437775f9981efa7662375b9e703322e5cbd4924747d2e524a150cb5c" exitCode=0 Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.166602 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" event={"ID":"0f81bbb9-980b-47b2-af98-ba0fde0896ef","Type":"ContainerDied","Data":"d8c568b4437775f9981efa7662375b9e703322e5cbd4924747d2e524a150cb5c"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.192686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" 
event={"ID":"da394811-8516-40db-b222-195e8e0c3e98","Type":"ContainerStarted","Data":"83dc2a6d22cedf59435c9ef0b00bf6da3d17467cbd258976f4399bba63865f0b"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.228974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.230280 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.730264435 +0000 UTC m=+144.585889991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.235056 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz"] Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.242185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" event={"ID":"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b","Type":"ContainerStarted","Data":"973c4f4deb2edc4dca9b5e652d38c2bd6b716df6c04d377ff54308b56d1d8d99"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.269174 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" event={"ID":"62c76947-3536-4d11-b06e-fa9fbdc2d55a","Type":"ContainerStarted","Data":"c421bf7ec78805c9a5a6de58267e20c3723d3f082aae85b15faf6d6dbddc3b11"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.273121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" event={"ID":"350b56cd-0460-44c4-a898-b4f03938f92a","Type":"ContainerStarted","Data":"9e4f85e7b70aaa3c2bb998a7a2986072f7ad48c54ffbacaf7c44291c2c99f5fb"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.274240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" event={"ID":"a7104733-5864-4e3d-855b-1e28181bb201","Type":"ContainerStarted","Data":"29104e927226f8b713a27e77f9dc187a0808c4ec21d7e8019666cb3c5812fab7"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.277893 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kbf24" event={"ID":"c1b41d56-2810-41ac-b2d6-b81b14389e44","Type":"ContainerStarted","Data":"ea40092714ed2e8e0285b16a34e128fe4a1c5517bbbbbbdab24b431a7414d419"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.303066 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" 
event={"ID":"abfabda3-e980-4d64-af7e-2c3f55142af6","Type":"ContainerStarted","Data":"8b61fdebf7947ef5c847c30ba53ed3e6b3465f7b522af694df12bc96c570e1d8"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.330925 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.332192 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.832173924 +0000 UTC m=+144.687799480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.336469 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b96zd" event={"ID":"18774b0b-cedf-47b3-9113-5531e4c256f0","Type":"ContainerStarted","Data":"213787bc8730a314986a5e5d379c2273dc8f1ccb9b85c68fffa228078cfc9327"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.336523 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b96zd" event={"ID":"18774b0b-cedf-47b3-9113-5531e4c256f0","Type":"ContainerStarted","Data":"0547cec67ee3669404e6d789a868011109fc7d7cbf0aa1bc1db1ed230d2d317f"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.337514 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.342317 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.342376 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b96zd" podUID="18774b0b-cedf-47b3-9113-5531e4c256f0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.361872 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hkl"] Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.361946 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bmflj"] Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.372073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" 
event={"ID":"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce","Type":"ContainerStarted","Data":"a49c9d84406109b770bddfbea5fbb20278d039d569462cecf42b736c70302f3a"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.372123 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" event={"ID":"a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce","Type":"ContainerStarted","Data":"3d7d73c174d7e34007d0a53376e5296d93e86f448fa17380774600bbc644df41"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.383233 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" event={"ID":"a54ee1bd-0add-4c97-8a2e-af3963dadaf3","Type":"ContainerStarted","Data":"7d717451e1b344e46d5a5a503e131f103f89d4a333b9e0f647959587968268ca"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.394203 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" event={"ID":"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e","Type":"ContainerStarted","Data":"6a3aeb84b87002771641c3b617a543cf03fc0e8276c56a2390aa51892c913b57"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.416019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" event={"ID":"032de0ec-0597-4308-bafc-071b70bbc9cd","Type":"ContainerStarted","Data":"ec79f72247c396c16ae33f17cc64b551fc99597b4475ad406409775ee95a8282"} Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.437168 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.438479 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:30.93845331 +0000 UTC m=+144.794078866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.443152 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.539220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.541254 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.041238564 +0000 UTC m=+144.896864120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.643550 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.644076 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.144059228 +0000 UTC m=+144.999684784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.745859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.746270 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.246257585 +0000 UTC m=+145.101883141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.798049 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrxgd"] Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.809378 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr"] Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.855739 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.856328 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.356304013 +0000 UTC m=+145.211929579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:30 crc kubenswrapper[4720]: I0202 08:58:30.984015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:30 crc kubenswrapper[4720]: E0202 08:58:30.984503 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.484485096 +0000 UTC m=+145.340110652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.087438 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.088131 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.588085781 +0000 UTC m=+145.443711337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.088486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.089005 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.588993975 +0000 UTC m=+145.444619531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:31 crc kubenswrapper[4720]: W0202 08:58:31.185896 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa3da67d_1641_4c84_9d0e_51788244f887.slice/crio-c4621ca40aecc78c7ffad0d36734e5c9bc46187eb2e00c4d2cf0574f3ab1ed1a WatchSource:0}: Error finding container c4621ca40aecc78c7ffad0d36734e5c9bc46187eb2e00c4d2cf0574f3ab1ed1a: Status 404 returned error can't find the container with id c4621ca40aecc78c7ffad0d36734e5c9bc46187eb2e00c4d2cf0574f3ab1ed1a Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.206637 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.206970 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.706950375 +0000 UTC m=+145.562575931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.248263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xz2ts" podStartSLOduration=123.248241361 podStartE2EDuration="2m3.248241361s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.211734543 +0000 UTC m=+145.067360119" watchObservedRunningTime="2026-02-02 08:58:31.248241361 +0000 UTC m=+145.103866917" Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.285314 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-d77qs"] Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.324752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.325442 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.825426898 +0000 UTC m=+145.681052454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.424620 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6"] Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.431678 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.440040 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:31.940008438 +0000 UTC m=+145.795633994 (durationBeforeRetry 500ms). 
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.466555 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9pczc" podStartSLOduration=124.466535928 podStartE2EDuration="2m4.466535928s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.465172602 +0000 UTC m=+145.320798158" watchObservedRunningTime="2026-02-02 08:58:31.466535928 +0000 UTC m=+145.322161484"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.472289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" event={"ID":"47b6705e-b780-4630-8d02-68d663d146cd","Type":"ContainerStarted","Data":"d416d3126e753e47dfd00b8bd691b4af1ab88a388f1c7af4cf150c22090fbcbe"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.511915 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrxgd" event={"ID":"fa3da67d-1641-4c84-9d0e-51788244f887","Type":"ContainerStarted","Data":"c4621ca40aecc78c7ffad0d36734e5c9bc46187eb2e00c4d2cf0574f3ab1ed1a"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.538453 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" event={"ID":"3465db32-883c-4dbd-8921-386c5f9de67a","Type":"ContainerStarted","Data":"b85119e8ea17a8d1539d6ced093dc884bdf70aa5a4fb3409b9b6532a6b5e2f3a"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.545196 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.545672 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.045659767 +0000 UTC m=+145.901285323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
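The nestedpendingoperations entries above show the volume manager's retry guard: each failed mount or unmount is parked, and no retry is permitted until now plus a durationBeforeRetry of 500ms, after which the reconciler tries again. A minimal Go sketch of that pattern, assuming a hypothetical mountDevice stand-in; this is illustrative, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// mountDevice is a hypothetical stand-in for the CSI MountDevice call that
// keeps failing in the log while the driver has not yet registered.
func mountDevice(registered map[string]bool, driver string) error {
	if !registered[driver] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil
}

func main() {
	registered := map[string]bool{} // empty: the plugin pod is still starting
	const backoff = 500 * time.Millisecond

	for attempt := 1; attempt <= 3; attempt++ {
		if err := mountDevice(registered, "kubevirt.io.hostpath-provisioner"); err == nil {
			fmt.Println("mounted")
			return
		} else {
			// Mirror the log: park the operation until now+backoff.
			retryAt := time.Now().Add(backoff)
			fmt.Printf("attempt %d failed: %v; no retries permitted until %s\n",
				attempt, err, retryAt.Format(time.RFC3339Nano))
			time.Sleep(backoff)
		}
	}
	fmt.Println("giving up (the real kubelet keeps retrying until the driver registers)")
}
```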
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.571478 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" event={"ID":"1bae959a-c36d-4986-80e0-dad2f0861334","Type":"ContainerStarted","Data":"3abc417a2a4c363cfd7024285692ab873a74cb561879c83eb6885e4e57a10259"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.591795 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" event={"ID":"da394811-8516-40db-b222-195e8e0c3e98","Type":"ContainerStarted","Data":"db1162ad3677047b09b4d802572c08df302706b6fb7db7a186c34769a0f959b0"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.630210 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz"]
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.636444 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg"]
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.646241 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.647585 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.147566637 +0000 UTC m=+146.003192193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.683033 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" podStartSLOduration=124.683003166 podStartE2EDuration="2m4.683003166s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.641281168 +0000 UTC m=+145.496906724" watchObservedRunningTime="2026-02-02 08:58:31.683003166 +0000 UTC m=+145.538628722"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.707239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" event={"ID":"b316d3e2-e731-49e3-8bad-43c1e276dd43","Type":"ContainerStarted","Data":"de4def20307a047d73e7aaff8deb6dec79b760badc676eb56bc97b9a1102743c"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.713696 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kbf24" event={"ID":"c1b41d56-2810-41ac-b2d6-b81b14389e44","Type":"ContainerStarted","Data":"08c187ced67969cf68b720d1112a0eea3f2d741684ac27823c09f2db205a0e2e"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.715752 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" event={"ID":"80b52220-efb0-4101-bb96-68169372693b","Type":"ContainerStarted","Data":"252332672ef3e50f703ce887947509a23b2b3ba5a2ca93028899e4dd13da2b1f"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.720999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" event={"ID":"a54ee1bd-0add-4c97-8a2e-af3963dadaf3","Type":"ContainerStarted","Data":"fdea4832c311e828900d5fdf5a685afcbcddae957793b6a7afc910af106235b4"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.722325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" event={"ID":"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb","Type":"ContainerStarted","Data":"7f0507baefdd68255014f2c93f95df985ebf86be92cc10249bbfd63cb0410fe8"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.722834 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.724345 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fd4g7" event={"ID":"a8765666-2f9b-4523-bbcd-5087b9ae8416","Type":"ContainerStarted","Data":"44cad7acaa8051de3e14a8274f59521f0c9f0c5fe1ae6fae75cc7b2fbd24ab54"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.730949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" event={"ID":"5a45af89-b379-407a-a34b-54ede9957c2d","Type":"ContainerStarted","Data":"8403d8367d2e41688c06cc56be907a39533b2c918b59016eae29b2fb82b43914"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.748859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.750769 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.25071339 +0000 UTC m=+146.106338946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.750860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" event={"ID":"350b56cd-0460-44c4-a898-b4f03938f92a","Type":"ContainerStarted","Data":"58eb9a8a2cd9ee8aece0c669524c6da7ad3866dd7349eb2fa439a0d41fa7a1f6"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.752046 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.767930 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" event={"ID":"38b584e2-923b-49f3-9681-a030512550d8","Type":"ContainerStarted","Data":"d4f99230b12319cd40ddee17d374fa7a59d2bb23723bc4584d0381d0b017a79c"}
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.794582 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" podStartSLOduration=124.794557354 podStartE2EDuration="2m4.794557354s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.784188866 +0000 UTC m=+145.639814422" watchObservedRunningTime="2026-02-02 08:58:31.794557354 +0000 UTC m=+145.650182910"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.831631 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-plbzk" podStartSLOduration=124.831607886 podStartE2EDuration="2m4.831607886s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.829085449 +0000 UTC m=+145.684711005" watchObservedRunningTime="2026-02-02 08:58:31.831607886 +0000 UTC m=+145.687233442"
m=+145.687233442" Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.837382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" event={"ID":"a7104733-5864-4e3d-855b-1e28181bb201","Type":"ContainerStarted","Data":"bb66dd210c8cf5e0f3a118ea26a07defba74704be66d613b191dd95d8a458fee"} Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.843574 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.843625 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b96zd" podUID="18774b0b-cedf-47b3-9113-5531e4c256f0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.850051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.851404 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.351379086 +0000 UTC m=+146.207004642 (durationBeforeRetry 500ms). 
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.929167 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-b96zd" podStartSLOduration=124.929137259 podStartE2EDuration="2m4.929137259s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:31.906046201 +0000 UTC m=+145.761671767" watchObservedRunningTime="2026-02-02 08:58:31.929137259 +0000 UTC m=+145.784762815"
Feb 02 08:58:31 crc kubenswrapper[4720]: I0202 08:58:31.985098 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:31 crc kubenswrapper[4720]: E0202 08:58:31.986046 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.486033083 +0000 UTC m=+146.341658639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.044607 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" podStartSLOduration=124.044589781 podStartE2EDuration="2m4.044589781s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:32.042629999 +0000 UTC m=+145.898255555" watchObservedRunningTime="2026-02-02 08:58:32.044589781 +0000 UTC m=+145.900215337"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.080546 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" podStartSLOduration=125.080517363 podStartE2EDuration="2m5.080517363s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:32.064501015 +0000 UTC m=+145.920126571" watchObservedRunningTime="2026-02-02 08:58:32.080517363 +0000 UTC m=+145.936142919"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.086685 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.087350 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.587326836 +0000 UTC m=+146.442952392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.117955 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kbf24" podStartSLOduration=125.117872504 podStartE2EDuration="2m5.117872504s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:32.111407881 +0000 UTC m=+145.967033437" watchObservedRunningTime="2026-02-02 08:58:32.117872504 +0000 UTC m=+145.973498060"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.138306 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmm7k" podStartSLOduration=125.13828383 podStartE2EDuration="2m5.13828383s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:32.137416968 +0000 UTC m=+145.993042524" watchObservedRunningTime="2026-02-02 08:58:32.13828383 +0000 UTC m=+145.993909386"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.169627 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" podStartSLOduration=125.1696058 podStartE2EDuration="2m5.1696058s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:32.167503263 +0000 UTC m=+146.023128819" watchObservedRunningTime="2026-02-02 08:58:32.1696058 +0000 UTC m=+146.025231356"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.189019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.189487 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.689467592 +0000 UTC m=+146.545093148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.292560 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.292766 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.792734958 +0000 UTC m=+146.648360504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.293130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.293539 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.793526299 +0000 UTC m=+146.649151845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.396524 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.397462 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.897430062 +0000 UTC m=+146.753055618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.405553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.406260 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:32.906230808 +0000 UTC m=+146.761856364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.494561 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.508388 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kbf24"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.514651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.515467 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.015435503 +0000 UTC m=+146.871061049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.535162 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 08:58:32 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Feb 02 08:58:32 crc kubenswrapper[4720]: [+]process-running ok
Feb 02 08:58:32 crc kubenswrapper[4720]: healthz check failed
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.535230 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.534849 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5r75z"]
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.616552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.618421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc"]
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.622074 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.122047829 +0000 UTC m=+146.977673385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.653585 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd"
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.723230 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.723692 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.22367084 +0000 UTC m=+147.079296396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.809680 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xjd27"]
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.827108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.827482 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.327468621 +0000 UTC m=+147.183094177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.840735 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm"]
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.850223 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9"]
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.929899 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.930154 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.4301102 +0000 UTC m=+147.285735756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.930773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:32 crc kubenswrapper[4720]: E0202 08:58:32.931255 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.43123782 +0000 UTC m=+147.286863376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.939838 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dqfpz" event={"ID":"38b584e2-923b-49f3-9681-a030512550d8","Type":"ContainerStarted","Data":"8e843af81307a7d2fc587cad882089c9bb7d2ce69678e0e6b06009a7ecc66986"}
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.952760 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" event={"ID":"82d280cc-a79e-42ef-a923-35a2faa20a90","Type":"ContainerStarted","Data":"b4834bf42b029d30cd4e6ef1953a6fd1616badcbd5639e2772e20537c49402b7"}
Feb 02 08:58:32 crc kubenswrapper[4720]: I0202 08:58:32.964522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" event={"ID":"3465db32-883c-4dbd-8921-386c5f9de67a","Type":"ContainerStarted","Data":"79575edd94c6da99a1661376e3ee20d47d55a21f84e9325f4465ffab7379b2a2"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.004992 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" event={"ID":"abfabda3-e980-4d64-af7e-2c3f55142af6","Type":"ContainerStarted","Data":"564d57cb4b75c497254d8b0a3a6a9510a8a1cd723fbe757acf82aeddca013d8f"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.006168 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bf8sq"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.012299 4720 patch_prober.go:28] interesting pod/console-operator-58897d9998-bf8sq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.012366 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" podUID="abfabda3-e980-4d64-af7e-2c3f55142af6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.013332 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c"]
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.027601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"]
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.033050 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.034348 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.534330551 +0000 UTC m=+147.389956107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.045989 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" event={"ID":"f30605a2-7f73-4f06-8e41-6430e6402b7c","Type":"ContainerStarted","Data":"f6f7e4869c36df41ec5ad76d46e45e024db6b9d36fc7801f8bd5617352cc07ba"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.046831 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-db26k"]
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.048994 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" event={"ID":"5164c8ec-dcff-4756-9459-d9c1f01a1e85","Type":"ContainerStarted","Data":"cfaa77eb306283666229eb0746d2996ee98bf11b510a78c8ef826595481738ff"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.053093 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wppsw" podStartSLOduration=125.053065533 podStartE2EDuration="2m5.053065533s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.012742653 +0000 UTC m=+146.868368199" watchObservedRunningTime="2026-02-02 08:58:33.053065533 +0000 UTC m=+146.908691089"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.053196 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" event={"ID":"d9d36a5f-dbbe-46cd-9139-548ee7a5ea0b","Type":"ContainerStarted","Data":"cc3197a4499e6c18971384598458aa57a163100ce3ec70715596a8a35bed8ea2"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.056007 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.063927 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv"]
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.067279 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-65c4r"]
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.088393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" event={"ID":"6543f0bd-97f3-42f6-94c6-73241331b6ca","Type":"ContainerStarted","Data":"992d455f7e695b1eae14ec7806ddde55f968dca21771e563f6fd379d645b2ee2"}
event={"ID":"6543f0bd-97f3-42f6-94c6-73241331b6ca","Type":"ContainerStarted","Data":"992d455f7e695b1eae14ec7806ddde55f968dca21771e563f6fd379d645b2ee2"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.092263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" event={"ID":"1de05e78-bbe5-4a95-85f7-323e2e1c76f3","Type":"ContainerStarted","Data":"0a894a64a3505e566c7d3fcc275011a55884a0d35a1a5f323d3a56298e84e0c0"} Feb 02 08:58:33 crc kubenswrapper[4720]: W0202 08:58:33.092333 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb25c8152_1fa8_48e2_9e86_32fdbd9fd0cd.slice/crio-13c2b6c74c18cd770ae946d397b943e0ee997aa2dc5f7df33a45e8b0253cbbe9 WatchSource:0}: Error finding container 13c2b6c74c18cd770ae946d397b943e0ee997aa2dc5f7df33a45e8b0253cbbe9: Status 404 returned error can't find the container with id 13c2b6c74c18cd770ae946d397b943e0ee997aa2dc5f7df33a45e8b0253cbbe9 Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.099186 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" event={"ID":"f4f370f1-d216-4b7b-86e6-f18ef12e9843","Type":"ContainerStarted","Data":"a36cea549c3070197818f4dc32d85b1dfef13c5836762918b8b1d22538843857"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.100995 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" event={"ID":"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e","Type":"ContainerStarted","Data":"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.101555 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.107489 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qfsxp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.107540 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.110619 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bf8sq" podStartSLOduration=126.110603554 podStartE2EDuration="2m6.110603554s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.076796199 +0000 UTC m=+146.932421755" watchObservedRunningTime="2026-02-02 08:58:33.110603554 +0000 UTC m=+146.966229100" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.120252 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" 
event={"ID":"7ed9a100-019b-4f35-ab4c-187b087a3e99","Type":"ContainerStarted","Data":"a297a9e65a3ae93266ea9a5e16ef352d4c7dc06c7d54e3f19d2e2d95836464b3"} Feb 02 08:58:33 crc kubenswrapper[4720]: W0202 08:58:33.123101 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37615bea_3d49_45d6_b190_450e2e078977.slice/crio-26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2 WatchSource:0}: Error finding container 26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2: Status 404 returned error can't find the container with id 26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2 Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.135269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.136799 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.636786146 +0000 UTC m=+147.492411702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.157651 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" event={"ID":"0f81bbb9-980b-47b2-af98-ba0fde0896ef","Type":"ContainerStarted","Data":"056545d5d4a9718e301470f8ba1ab7614c0176753fd24589466bede916423aa9"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.165019 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" podStartSLOduration=125.164989761 podStartE2EDuration="2m5.164989761s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.154205492 +0000 UTC m=+147.009831048" watchObservedRunningTime="2026-02-02 08:58:33.164989761 +0000 UTC m=+147.020615317" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.165190 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" podStartSLOduration=126.165184616 podStartE2EDuration="2m6.165184616s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.113357118 +0000 UTC m=+146.968982664" watchObservedRunningTime="2026-02-02 08:58:33.165184616 +0000 UTC m=+147.020810172" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.188691 4720 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl6l" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.195472 4720 generic.go:334] "Generic (PLEG): container finished" podID="62c76947-3536-4d11-b06e-fa9fbdc2d55a" containerID="214dd5cadb8d8d569e34a0d55ef7c85010a1c7bc60264764f9e2463cb2f39686" exitCode=0 Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.195581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" event={"ID":"62c76947-3536-4d11-b06e-fa9fbdc2d55a","Type":"ContainerDied","Data":"214dd5cadb8d8d569e34a0d55ef7c85010a1c7bc60264764f9e2463cb2f39686"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.214396 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" event={"ID":"a7104733-5864-4e3d-855b-1e28181bb201","Type":"ContainerStarted","Data":"d4db58e5e0fd9bd960a3762bd634cb27cd721a98844821083d32d9c08cb922b9"} Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.215305 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" podStartSLOduration=125.215280348 podStartE2EDuration="2m5.215280348s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.177678231 +0000 UTC m=+147.033303787" watchObservedRunningTime="2026-02-02 08:58:33.215280348 +0000 UTC m=+147.070905914" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.258830 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.260370 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" podStartSLOduration=125.260349955 podStartE2EDuration="2m5.260349955s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.214088866 +0000 UTC m=+147.069714422" watchObservedRunningTime="2026-02-02 08:58:33.260349955 +0000 UTC m=+147.115975511" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.261020 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.262515 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.264328 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.764304541 +0000 UTC m=+147.619930097 (durationBeforeRetry 500ms). 
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.283052 4720 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5kntx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]log ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]etcd ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/max-in-flight-filter ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 02 08:58:33 crc kubenswrapper[4720]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-startinformers ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 08:58:33 crc kubenswrapper[4720]: livez check failed
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.283122 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5kntx" podUID="a8591d8f-4bd7-4eaf-a781-3e5bbc7c03ce" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.304618 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" podStartSLOduration=125.300170452 podStartE2EDuration="2m5.300170452s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.299412582 +0000 UTC m=+147.155038138" watchObservedRunningTime="2026-02-02 08:58:33.300170452 +0000 UTC m=+147.155796008"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.318296 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" event={"ID":"032de0ec-0597-4308-bafc-071b70bbc9cd","Type":"ContainerStarted","Data":"994e43a14cb466a0c23f2bfe79bc2eebfc97d06cbc8d392baabd47784ff43361"}
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.319502 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
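The router and openshift-apiserver startup probes above fail with an aggregated healthz body: one [+]/[-] line per sub-check, and the endpoint returns 500 while any check is [-]. A small Go sketch that classifies such a body the same way; the parsing rules are inferred from the output format shown here, not taken from the apiserver's code:

```go
package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the names of sub-checks marked [-] in a healthz body.
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(line, "[-]") {
			name := strings.TrimPrefix(line, "[-]")
			name = strings.TrimSuffix(name, " failed: reason withheld")
			failed = append(failed, name)
		}
	}
	return failed
}

func main() {
	// Abbreviated body, copied from the livez output logged above.
	body := `[+]ping ok
[+]etcd ok
[-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
livez check failed`

	if failed := failedChecks(body); len(failed) > 0 {
		// The kubelet surfaces this as: Startup probe status=failure,
		// output="HTTP probe failed with statuscode: 500".
		fmt.Println("unhealthy checks:", failed)
	}
}
```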
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.319502 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.319540 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b96zd" podUID="18774b0b-cedf-47b3-9113-5531e4c256f0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.338202 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wn5z2" podStartSLOduration=125.33818475 podStartE2EDuration="2m5.33818475s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.337534483 +0000 UTC m=+147.193160039" watchObservedRunningTime="2026-02-02 08:58:33.33818475 +0000 UTC m=+147.193810306"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.362285 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fd4g7" podStartSLOduration=7.362265745 podStartE2EDuration="7.362265745s" podCreationTimestamp="2026-02-02 08:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:33.361650228 +0000 UTC m=+147.217275794" watchObservedRunningTime="2026-02-02 08:58:33.362265745 +0000 UTC m=+147.217891301"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.366845 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.367255 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.867237978 +0000 UTC m=+147.722863534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.467901 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.470085 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:33.970069933 +0000 UTC m=+147.825695489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.529017 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 08:58:33 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Feb 02 08:58:33 crc kubenswrapper[4720]: [+]process-running ok
Feb 02 08:58:33 crc kubenswrapper[4720]: healthz check failed
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.529068 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.559036 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.559202 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"
Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.576778 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
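The startup-probe bodies quoted above (openshift-apiserver and router-default) follow the usual Kubernetes healthz convention: each registered check prints as [+]name ok or [-]name failed (a failing check may withhold its reason), and a single failing check is enough to turn the whole response into HTTP 500, which the kubelet then records as a failed probe. An illustrative checker in that style (assumed output format only, not the actual k8s.io/apiserver implementation):

```go
// Illustrative sketch of the aggregated health-check format seen in the
// probe output above: every check is listed as [+] or [-], and one failure
// is enough to turn the whole response into an HTTP 500.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

type check struct {
	name string
	ok   bool
}

// healthz renders the checks in the [+]/[-] style and picks the status code.
func healthz(checks []check) (int, string) {
	var b strings.Builder
	status := http.StatusOK
	for _, c := range checks {
		if c.ok {
			fmt.Fprintf(&b, "[+]%s ok\n", c.name)
		} else {
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			status = http.StatusInternalServerError
		}
	}
	if status != http.StatusOK {
		b.WriteString("healthz check failed\n")
	}
	return status, b.String()
}

func main() {
	// Mirrors the router-default probe body above.
	code, body := healthz([]check{
		{"backend-http", false},
		{"has-synced", false},
		{"process-running", true},
	})
	fmt.Printf("HTTP %d\n%s", code, body)
}
```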
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.077246883 +0000 UTC m=+147.932872429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.677761 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.678801 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.178782683 +0000 UTC m=+148.034408239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.781773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.782307 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.282265025 +0000 UTC m=+148.137890581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.883560 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.884338 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.384287257 +0000 UTC m=+148.239912813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:33 crc kubenswrapper[4720]: I0202 08:58:33.985824 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:33 crc kubenswrapper[4720]: E0202 08:58:33.986300 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.486277509 +0000 UTC m=+148.341903055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.069201 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.087062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.087547 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.587522121 +0000 UTC m=+148.443147677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.189306 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.189928 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.689901093 +0000 UTC m=+148.545526649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.290759 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.291202 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.791169946 +0000 UTC m=+148.646795502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.291443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.291788 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.791773701 +0000 UTC m=+148.647399247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.355233 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" event={"ID":"62c76947-3536-4d11-b06e-fa9fbdc2d55a","Type":"ContainerStarted","Data":"a25dd58a4c70c5011cd13235ce42ea2e882c86f2ffb64dc6f3ac337c7e36202e"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.356142 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.359075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" event={"ID":"b316d3e2-e731-49e3-8bad-43c1e276dd43","Type":"ContainerStarted","Data":"39330b3ef7e1d17f30bd7cbc28ffc535a6bceec741f858777922faca64a478b8"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.363397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" event={"ID":"47b6705e-b780-4630-8d02-68d663d146cd","Type":"ContainerStarted","Data":"0e78a74522c86d02874a5ddc96de39daf37a49a8b0c02047d6e6f88b86c306cb"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.368312 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" event={"ID":"7fc2c97f-7857-4792-a92b-88c1415b652f","Type":"ContainerStarted","Data":"ce9d365dca9ecf5bc126a00e905204920b68ef5c51f4fdfc60151c083916a33e"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.368374 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" event={"ID":"7fc2c97f-7857-4792-a92b-88c1415b652f","Type":"ContainerStarted","Data":"5b27380e893b9ce10cb7877e2a5cdfa39f9c13d0a40df22b6dde10559f779a08"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.394569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.394795 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.89475126 +0000 UTC m=+148.750376826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.395047 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.395525 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.895511141 +0000 UTC m=+148.751136697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.401707 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" event={"ID":"1bae959a-c36d-4986-80e0-dad2f0861334","Type":"ContainerStarted","Data":"e8b216c56233be92e8047f2482def6574585655c483ca69f308e11d0bb9396db"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.420420 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tb5vz" event={"ID":"7ed9a100-019b-4f35-ab4c-187b087a3e99","Type":"ContainerStarted","Data":"ead4a537d527b33161d6f5900188ad4d05f4fd6b02ad9d10e47bca8be03fc350"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.440952 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-d77qs" podStartSLOduration=126.440929977 podStartE2EDuration="2m6.440929977s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.437220587 +0000 UTC m=+148.292846133" watchObservedRunningTime="2026-02-02 08:58:34.440929977 +0000 UTC m=+148.296555533" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.448908 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48" podStartSLOduration=127.44887554 podStartE2EDuration="2m7.44887554s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.414178371 +0000 UTC m=+148.269803927" 
watchObservedRunningTime="2026-02-02 08:58:34.44887554 +0000 UTC m=+148.304501096" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.477371 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pkzv8"] Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.478382 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.485920 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bmflj" podStartSLOduration=127.485867571 podStartE2EDuration="2m7.485867571s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.485548382 +0000 UTC m=+148.341173938" watchObservedRunningTime="2026-02-02 08:58:34.485867571 +0000 UTC m=+148.341493127" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.486582 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrxgd" event={"ID":"fa3da67d-1641-4c84-9d0e-51788244f887","Type":"ContainerStarted","Data":"7030b214ed84d47111d1883532039383778dfca2924cf788ea4fa18a55c58f9a"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.486637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrxgd" event={"ID":"fa3da67d-1641-4c84-9d0e-51788244f887","Type":"ContainerStarted","Data":"8a2ca2f29b19ed1fd31ea23e76e4d6716eaca42a19139b0b2d3bb661a4321431"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.488107 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.495611 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.497638 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkzv8"] Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.497704 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.497985 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.997966615 +0000 UTC m=+148.853592171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.498590 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.499371 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:34.999345812 +0000 UTC m=+148.854971358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.535302 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" event={"ID":"b77d8e1c-b943-452f-b1b0-21ac07685158","Type":"ContainerStarted","Data":"e702ffb49de2394bd3cf44dbad8caf7a4a91a0abf74d30fac803b372c873d7ef"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.535368 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" event={"ID":"b77d8e1c-b943-452f-b1b0-21ac07685158","Type":"ContainerStarted","Data":"e67d2a3be09231e31ef736086274d4bbc3b90cad17636adcc7f8bb91556d44ae"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.545895 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8ztlm" podStartSLOduration=127.545857588 podStartE2EDuration="2m7.545857588s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.536189769 +0000 UTC m=+148.391815325" watchObservedRunningTime="2026-02-02 08:58:34.545857588 +0000 UTC m=+148.401483144" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.556807 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 08:58:34 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Feb 02 08:58:34 crc kubenswrapper[4720]: [+]process-running ok Feb 02 08:58:34 crc kubenswrapper[4720]: healthz check failed Feb 02 08:58:34 
crc kubenswrapper[4720]: I0202 08:58:34.557011 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.589234 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hkl" podStartSLOduration=127.589210699 podStartE2EDuration="2m7.589210699s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.587810681 +0000 UTC m=+148.443436227" watchObservedRunningTime="2026-02-02 08:58:34.589210699 +0000 UTC m=+148.444836255" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.599685 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.599797 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.099776901 +0000 UTC m=+148.955402447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.600214 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.600258 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.600281 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.600350 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lqfbw\" (UniqueName: \"kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.600670 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.100658825 +0000 UTC m=+148.956284381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.624560 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" event={"ID":"6543f0bd-97f3-42f6-94c6-73241331b6ca","Type":"ContainerStarted","Data":"f1e3ae617097ed34dee3c12debf3e68294f879ed821b90292ad7d8b27bc9f593"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.629799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" event={"ID":"b5843997-ba37-4e10-ae40-67c335d91321","Type":"ContainerStarted","Data":"93569fff1e0883dd3fb4cc7059431a4e1ad298704c693d58bbb157dc17017cf5"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.635664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" event={"ID":"1de05e78-bbe5-4a95-85f7-323e2e1c76f3","Type":"ContainerStarted","Data":"d2036b66de8fb48cf7613c82cc20cdb67e393f3c16e771abd09634d6b1f087b1"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.648711 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qrxgd" podStartSLOduration=8.648693392 podStartE2EDuration="8.648693392s" podCreationTimestamp="2026-02-02 08:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.647263703 +0000 UTC m=+148.502889269" watchObservedRunningTime="2026-02-02 08:58:34.648693392 +0000 UTC m=+148.504318948" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.671686 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrvwk"] Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" event={"ID":"a54ee1bd-0add-4c97-8a2e-af3963dadaf3","Type":"ContainerStarted","Data":"e2aa9034504e3077f14c1f124bc2426282b54732ac97c5f6c6b100639c2b2186"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701225 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ssn4h" event={"ID":"f30605a2-7f73-4f06-8e41-6430e6402b7c","Type":"ContainerStarted","Data":"6380545a963a84120db0421f07f71f6bc0aef5d414ee2dbb13eeb6556d057699"} Feb 02 08:58:34 crc kubenswrapper[4720]: 
I0202 08:58:34.701248 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrvwk"] Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.701261 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.20124483 +0000 UTC m=+149.056870386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701551 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqfbw\" (UniqueName: \"kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.701705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.702801 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 08:58:35.202784491 +0000 UTC m=+149.058410047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.703101 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.708920 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.716385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.721688 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" event={"ID":"f4f370f1-d216-4b7b-86e6-f18ef12e9843","Type":"ContainerStarted","Data":"5669eda838466404f77ae8960f8584fb15eb8b2155ba30f77e6aa65b6c908b02"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.732781 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g98rt" podStartSLOduration=127.732760603 podStartE2EDuration="2m7.732760603s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.729957068 +0000 UTC m=+148.585582624" watchObservedRunningTime="2026-02-02 08:58:34.732760603 +0000 UTC m=+148.588386159" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.748865 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fd4g7" event={"ID":"a8765666-2f9b-4523-bbcd-5087b9ae8416","Type":"ContainerStarted","Data":"4cdc0384a6a92a0265158ca4ebbba649155f5493192984bfdae6f74324087f26"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.761702 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dctsk" podStartSLOduration=126.761665508 podStartE2EDuration="2m6.761665508s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.758658997 +0000 UTC m=+148.614284553" watchObservedRunningTime="2026-02-02 08:58:34.761665508 +0000 UTC m=+148.617291064" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.771146 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lqfbw\" (UniqueName: \"kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw\") pod \"community-operators-pkzv8\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.774437 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" event={"ID":"1b688028-fc1d-4943-99e8-101d0ab88506","Type":"ContainerStarted","Data":"b4ea25f260cd6d292c7cbf6cf1a0ffd323bd2f75ec21685925e97e4435d9f887"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.774483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" event={"ID":"1b688028-fc1d-4943-99e8-101d0ab88506","Type":"ContainerStarted","Data":"1ae67b52612aaf7cea6afa13d9a9d8a973fa7ba78a127810b2d89abc8f270d1b"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.775473 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.786254 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" event={"ID":"032de0ec-0597-4308-bafc-071b70bbc9cd","Type":"ContainerStarted","Data":"5f02fbb44f96de45a28e9d202c3b1446ae87ce754a022d0b09174c156457bcde"} Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.795355 4720 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m5lr9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.795422 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" podUID="1b688028-fc1d-4943-99e8-101d0ab88506" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.802618 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.802945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8776z\" (UniqueName: \"kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.803167 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:58:34 crc 
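The reconciler_common / operation_generator entries around the new catalog pods trace the volume manager's usual reconcile pattern: VerifyControllerAttachedVolume records each required volume in the actual state of world, operationExecutor.MountVolume starts setup, and MountVolume.SetUp succeeded confirms the emptyDir and projected-token volumes are in place; only the CSI-backed PVC keeps failing, because its driver is still unregistered. A toy sketch of that desired-versus-actual loop (illustrative only, not the kubelet's volumemanager; the function and variable names here are hypothetical):

```go
// Toy reconcile loop: diff desired volumes against actually-mounted ones
// and issue mounts/unmounts for the difference, as the log entries show.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			actual[v] = true // assume SetUp succeeded
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(actual, v)
		}
	}
}

func main() {
	// Mirrors community-operators-pkzv8's three volumes from the log above.
	desired := map[string]bool{"utilities": true, "catalog-content": true, "kube-api-access-lqfbw": true}
	actual := map[string]bool{}
	reconcile(desired, actual)
}
```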
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.803459 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.805940 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.305923104 +0000 UTC m=+149.161548660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.810484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" event={"ID":"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c","Type":"ContainerStarted","Data":"311d35c0d247a206fa57c676c22a301396082db23fef1ba67306d17b4ed38f45"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.810537 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" event={"ID":"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c","Type":"ContainerStarted","Data":"0ef9143f2cea2d582b35f078d9d8e7c8ad99be3790a73dbce1cf84e9f041f453"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.811562 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.814632 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" event={"ID":"80b52220-efb0-4101-bb96-68169372693b","Type":"ContainerStarted","Data":"b66df09da4e289889131ecd2b213d7fba89d58b772d16feeb7e5c85c44b7d2c9"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.834477 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" event={"ID":"37615bea-3d49-45d6-b190-450e2e078977","Type":"ContainerStarted","Data":"384d267a4ffc7043539cbe62adf85065eab6bdf2cc90b9895fbb7f032a7a7b8d"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.834529 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" event={"ID":"37615bea-3d49-45d6-b190-450e2e078977","Type":"ContainerStarted","Data":"26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.848485 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hgccg" podStartSLOduration=127.848466453 podStartE2EDuration="2m7.848466453s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.846551142 +0000 UTC m=+148.702176698" watchObservedRunningTime="2026-02-02 08:58:34.848466453 +0000 UTC m=+148.704092009"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.876849 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" event={"ID":"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd","Type":"ContainerStarted","Data":"723f5a8b614b980be0f1fd53044b283b9cd2221338af1d37c7219826e87420d0"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.876932 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" event={"ID":"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd","Type":"ContainerStarted","Data":"13c2b6c74c18cd770ae946d397b943e0ee997aa2dc5f7df33a45e8b0253cbbe9"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.897243 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkzv8"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.902756 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" podStartSLOduration=126.902729637 podStartE2EDuration="2m6.902729637s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.881666712 +0000 UTC m=+148.737292268" watchObservedRunningTime="2026-02-02 08:58:34.902729637 +0000 UTC m=+148.758355193"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.906433 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.906553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.906635 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.906669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8776z\" (UniqueName: \"kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.908617 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.909358 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: E0202 08:58:34.910450 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.410433282 +0000 UTC m=+149.266059058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.921255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xjd27" event={"ID":"635f5723-3a45-43b1-9745-2261943f0de1","Type":"ContainerStarted","Data":"ac79e9e82f81dfc7d4fbc1a5b080cd6cf36c54da07ec908b9a22b27fb6059fdb"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.921299 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xjd27" event={"ID":"635f5723-3a45-43b1-9745-2261943f0de1","Type":"ContainerStarted","Data":"0748a87bd40fbde0746fb0b47ef62a3acef16a63658fbed36a2102e928021581"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.921311 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6hhb"]
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.922353 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6hhb"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.926224 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" event={"ID":"5164c8ec-dcff-4756-9459-d9c1f01a1e85","Type":"ContainerStarted","Data":"e10c5cc1238bfbab841b7f6fc18a946b5f15cdbec120536c275a9963454be790"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.927546 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6hhb"]
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.950582 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" event={"ID":"82d280cc-a79e-42ef-a923-35a2faa20a90","Type":"ContainerStarted","Data":"415e71f08760b3943a2b4dfd8aac9f10075db025e94bca1ea4692dfd9535fe73"}
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.950635 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.978373 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.978533 4720 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rn4r6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body=
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.978574 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" podUID="82d280cc-a79e-42ef-a923-35a2faa20a90" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.978930 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8776z\" (UniqueName: \"kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z\") pod \"certified-operators-lrvwk\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " pod="openshift-marketplace/certified-operators-lrvwk"
Feb 02 08:58:34 crc kubenswrapper[4720]: I0202 08:58:34.986551 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bf8sq"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.018871 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.019228 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.519190456 +0000 UTC m=+149.374816012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.019492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.019721 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hl9\" (UniqueName: \"kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.019867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.019930 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.020255 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" podStartSLOduration=128.020235304 podStartE2EDuration="2m8.020235304s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.018873257 +0000 UTC m=+148.874498813" watchObservedRunningTime="2026-02-02 08:58:35.020235304 +0000 UTC m=+148.875860860"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.020470 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d9xbv" podStartSLOduration=128.02046636 podStartE2EDuration="2m8.02046636s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:34.941119114 +0000 UTC m=+148.796744670" watchObservedRunningTime="2026-02-02 08:58:35.02046636 +0000 UTC m=+148.876091916"
Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.021405 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6l4dl"
Feb 02
08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.026712 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.526684846 +0000 UTC m=+149.382310402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.047099 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" podStartSLOduration=127.047079083 podStartE2EDuration="2m7.047079083s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.046951379 +0000 UTC m=+148.902576935" watchObservedRunningTime="2026-02-02 08:58:35.047079083 +0000 UTC m=+148.902704629" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.049636 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.065084 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"] Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.088271 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpswr" podStartSLOduration=128.088249925 podStartE2EDuration="2m8.088249925s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.086753175 +0000 UTC m=+148.942378731" watchObservedRunningTime="2026-02-02 08:58:35.088249925 +0000 UTC m=+148.943875481" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.127849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.131203 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.631160235 +0000 UTC m=+149.486785791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.144802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.144952 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145003 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145057 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hl9\" (UniqueName: \"kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145127 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145184 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.145935 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.158730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.159367 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.65934901 +0000 UTC m=+149.514974566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.161963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.173352 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.173553 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"] Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.173740 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.174040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.174681 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.190415 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xf5vc" podStartSLOduration=128.190384241 podStartE2EDuration="2m8.190384241s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.179403347 +0000 UTC m=+149.035028903" watchObservedRunningTime="2026-02-02 08:58:35.190384241 +0000 UTC m=+149.046009797" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.202049 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hl9\" (UniqueName: \"kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9\") pod \"community-operators-p6hhb\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.211353 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.213601 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" podStartSLOduration=127.213582583 podStartE2EDuration="2m7.213582583s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.21199747 +0000 UTC m=+149.067623026" watchObservedRunningTime="2026-02-02 08:58:35.213582583 +0000 UTC m=+149.069208139" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.227536 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.262331 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.263153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.263424 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.263493 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpvv\" (UniqueName: \"kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.263525 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.263634 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.763615673 +0000 UTC m=+149.619241229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.281320 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.371798 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xjd27" podStartSLOduration=9.37177516 podStartE2EDuration="9.37177516s" podCreationTimestamp="2026-02-02 08:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.324344529 +0000 UTC m=+149.179970085" watchObservedRunningTime="2026-02-02 08:58:35.37177516 +0000 UTC m=+149.227400716" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.375842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.375924 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.375962 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.376018 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpvv\" (UniqueName: \"kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.376048 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.376315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.376667 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.87665069 +0000 UTC m=+149.732276246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.404988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpvv\" (UniqueName: \"kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv\") pod \"certified-operators-7rqfx\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.480247 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.480711 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:35.980687687 +0000 UTC m=+149.836313233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.511068 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 08:58:35 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Feb 02 08:58:35 crc kubenswrapper[4720]: [+]process-running ok Feb 02 08:58:35 crc kubenswrapper[4720]: healthz check failed Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.511132 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.528531 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.581997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.582510 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.082486313 +0000 UTC m=+149.938111869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.600118 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6" podStartSLOduration=127.600091955 podStartE2EDuration="2m7.600091955s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:35.466555508 +0000 UTC m=+149.322181064" watchObservedRunningTime="2026-02-02 08:58:35.600091955 +0000 UTC m=+149.455717511" Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.615089 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkzv8"] Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.683318 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.683708 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.183686645 +0000 UTC m=+150.039312201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.757585 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrvwk"] Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.784974 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.785386 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.285371038 +0000 UTC m=+150.140996594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: W0202 08:58:35.790488 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd422076d_6f6a_42ea_a820_4aa8399e4a8c.slice/crio-4e8bc3fb716ad73554defc724a87566e04a61271d6048165b8c97478dc3a1540 WatchSource:0}: Error finding container 4e8bc3fb716ad73554defc724a87566e04a61271d6048165b8c97478dc3a1540: Status 404 returned error can't find the container with id 4e8bc3fb716ad73554defc724a87566e04a61271d6048165b8c97478dc3a1540 Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.888553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.888927 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.388905541 +0000 UTC m=+150.244531087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.978569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" event={"ID":"b77d8e1c-b943-452f-b1b0-21ac07685158","Type":"ContainerStarted","Data":"4bdad201f62840a9ee8e54098f184b4693670a83c030e1f7d40338aa277d0690"} Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.984819 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerStarted","Data":"4e8bc3fb716ad73554defc724a87566e04a61271d6048165b8c97478dc3a1540"} Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.986070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnk8c" event={"ID":"b25c8152-1fa8-48e2-9e86-32fdbd9fd0cd","Type":"ContainerStarted","Data":"843b2f3b438ee7dd78579bcaccb6ad0a398e5c48a91fecc2958e448cfa6d52f8"} Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.988764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" event={"ID":"b5843997-ba37-4e10-ae40-67c335d91321","Type":"ContainerStarted","Data":"33dec7f34181638645bdabf1f8b708bedbf1587126d93f66fabd05b3da698fa3"} Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.990203 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:35 crc kubenswrapper[4720]: E0202 08:58:35.990687 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.490657537 +0000 UTC m=+150.346283093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:35 crc kubenswrapper[4720]: I0202 08:58:35.992546 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerStarted","Data":"3daf884fd9a3d7144a6aa3840707f8c3fed4901338b90ac18c7df06aca4bb864"} Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.006639 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-db26k" podStartSLOduration=128.006621784 podStartE2EDuration="2m8.006621784s" podCreationTimestamp="2026-02-02 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:36.004437316 +0000 UTC m=+149.860062872" watchObservedRunningTime="2026-02-02 08:58:36.006621784 +0000 UTC m=+149.862247340" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.029926 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" event={"ID":"0347bb51-bfe3-44a6-be39-3c4f0eb8d91c","Type":"ContainerStarted","Data":"a4fed55b3ced6f45140d558e3396f253b2f805a1afe965347adfe0c01e710ff0"} Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.034522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" event={"ID":"1de05e78-bbe5-4a95-85f7-323e2e1c76f3","Type":"ContainerStarted","Data":"8934af786a6d541e3436d6ede7a8522f30c16ae9a5349c121ee6674915fdbc47"} Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.092955 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.093290 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.593253125 +0000 UTC m=+150.448878681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.093939 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.095621 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5r75z" podStartSLOduration=129.095592228 podStartE2EDuration="2m9.095592228s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:36.077806541 +0000 UTC m=+149.933432097" watchObservedRunningTime="2026-02-02 08:58:36.095592228 +0000 UTC m=+149.951217784" Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.097407 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.597392135 +0000 UTC m=+150.453017851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.109563 4720 csr.go:261] certificate signing request csr-w2wcd is approved, waiting to be issued Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.124919 4720 csr.go:257] certificate signing request csr-w2wcd is issued Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.144930 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m5lr9" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.205104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.206809 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 08:58:36.706786556 +0000 UTC m=+150.562412112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.313685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.314497 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.81448058 +0000 UTC m=+150.670106126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.420708 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.421076 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:36.921053225 +0000 UTC m=+150.776678771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.423456 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6hhb"] Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.476731 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jwrgw"] Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.486971 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.495806 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.511811 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 08:58:36 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Feb 02 08:58:36 crc kubenswrapper[4720]: [+]process-running ok Feb 02 08:58:36 crc kubenswrapper[4720]: healthz check failed Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.512024 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.521807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.522226 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.022213864 +0000 UTC m=+150.877839410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.561629 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwrgw"] Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.636233 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.636841 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4z68\" (UniqueName: \"kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.636911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.636929 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.637038 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.137007709 +0000 UTC m=+150.992633265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.722683 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"]
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.738059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.738098 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.738159 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.738194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4z68\" (UniqueName: \"kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.738782 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.738793 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.238776165 +0000 UTC m=+151.094401721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.739055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.762919 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p5x48"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.771383 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4z68\" (UniqueName: \"kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68\") pod \"redhat-marketplace-jwrgw\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.845677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.846479 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.346461399 +0000 UTC m=+151.202086955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.846797 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwrgw"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.864787 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"]
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.866400 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.881771 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"]
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.943210 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rn4r6"
Feb 02 08:58:36 crc kubenswrapper[4720]: I0202 08:58:36.948229 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:36 crc kubenswrapper[4720]: E0202 08:58:36.948596 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.448581095 +0000 UTC m=+151.304206651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.064329 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.064553 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.5645088 +0000 UTC m=+151.420134356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.064815 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.081271 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.581235498 +0000 UTC m=+151.436861044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.089194 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vph77\" (UniqueName: \"kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.089372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.089488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.095218 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerStarted","Data":"8ca7366458b1933beb30b94cbf3c3be4064b67cde8c1c1027216419d8fc800e9"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.130670 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 08:53:36 +0000 UTC, rotation deadline is 2026-10-19 09:29:47.22239172 +0000 UTC
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.130830 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6216h31m10.091565614s for next certificate rotation
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.131553 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerID="48487db390721d736c87a880f3efb2a91d2ac59505d62b65146db19a46d9c6ba" exitCode=0
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.131756 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerDied","Data":"48487db390721d736c87a880f3efb2a91d2ac59505d62b65146db19a46d9c6ba"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.131845 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerStarted","Data":"12876dd53053f6d7b39e7da0ff6480df973d6d20c2a5f319d53dd2593b72e365"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.139564 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.159098 4720 generic.go:334] "Generic (PLEG): container finished" podID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerID="b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4" exitCode=0
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.159303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerDied","Data":"b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.182608 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f73ea0f83327c81512d97480795b0293bfddde1a318c73e035c11d53f98431ef"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.182810 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"410bdc48513be6c64563bb340210695f2a601754f46d40866458a706fbaf247d"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.190419 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.190768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vph77\" (UniqueName: \"kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.190909 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.191036 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.192279 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.692262952 +0000 UTC m=+151.547888508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.192856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.193212 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.196664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e4387ce1e5f6d001d1f750714c9761a476b33d42a18b7687ccd60adf91d05ea8"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.196712 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e2dff19130a5174a38ebd5fe4449c8cd8c2f830a615c3a3bf332c87b096ad754"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.215350 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"706953f8e7a43ff4bb14956ccc0fbb9e79f21a8b8ee491752f1336e933a502ec"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.215403 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"093577e80017c571e3fb7ef75c4d7ecce95e6b6a4cb9fd8008d1f9c93acf64d2"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.216122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.228581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vph77\" (UniqueName: \"kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77\") pod \"redhat-marketplace-r4tb2\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.257113 4720 generic.go:334] "Generic (PLEG): container finished" podID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerID="014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c" exitCode=0
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.257251 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerDied","Data":"014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c"}
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.285440 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4tb2"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.301624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.303650 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.803631715 +0000 UTC m=+151.659257261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.404774 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.405970 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:37.905943376 +0000 UTC m=+151.761568922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.504501 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwrgw"]
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.508749 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.509273 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.009246432 +0000 UTC m=+151.864871988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.515192 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 08:58:37 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Feb 02 08:58:37 crc kubenswrapper[4720]: [+]process-running ok
Feb 02 08:58:37 crc kubenswrapper[4720]: healthz check failed
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.515276 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:37 crc kubenswrapper[4720]: W0202 08:58:37.532418 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00de3c0_345f_4bba_a14e_7f7f351b2d23.slice/crio-7ba76fbffc18a5d94d7b82840ba472303bdd60c718fdefa21eb9052c349f54b8 WatchSource:0}: Error finding container 7ba76fbffc18a5d94d7b82840ba472303bdd60c718fdefa21eb9052c349f54b8: Status 404 returned error can't find the container with id 7ba76fbffc18a5d94d7b82840ba472303bdd60c718fdefa21eb9052c349f54b8
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.610574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.610830 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.110774082 +0000 UTC m=+151.966399638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.611255 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.611651 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.111637975 +0000 UTC m=+151.967263521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.713393 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.713533 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.213505523 +0000 UTC m=+152.069131079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.714037 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.714534 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.214526291 +0000 UTC m=+152.070151847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.737624 4720 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.815381 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.815724 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.315624758 +0000 UTC m=+152.171250314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.816013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.816458 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.316439351 +0000 UTC m=+152.172064907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.854246 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-69l4c"]
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.855482 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.857525 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.869999 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69l4c"]
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.917372 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.917797 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.917860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:37 crc kubenswrapper[4720]: I0202 08:58:37.917970 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26x9\" (UniqueName: \"kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:37 crc kubenswrapper[4720]: E0202 08:58:37.918200 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.418173025 +0000 UTC m=+152.273798631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:37.999946 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"]
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.019692 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.019822 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.019925 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26x9\" (UniqueName: \"kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.019995 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.020387 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: E0202 08:58:38.020468 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.520450424 +0000 UTC m=+152.376075990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bqg82" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.020714 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.068972 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26x9\" (UniqueName: \"kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9\") pod \"redhat-operators-69l4c\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.093242 4720 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T08:58:37.737667581Z","Handler":null,"Name":""}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.126939 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:38 crc kubenswrapper[4720]: E0202 08:58:38.128426 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 08:58:38.628396667 +0000 UTC m=+152.484022223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.156641 4720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.156700 4720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.230131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.231558 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69l4c"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.233813 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.233844 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.259271 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5kntx"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.263316 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"]
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.265365 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.268282 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5kntx"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.282264 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bqg82\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.283969 4720 generic.go:334] "Generic (PLEG): container finished" podID="37615bea-3d49-45d6-b190-450e2e078977" containerID="384d267a4ffc7043539cbe62adf85065eab6bdf2cc90b9895fbb7f032a7a7b8d" exitCode=0
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.284034 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" event={"ID":"37615bea-3d49-45d6-b190-450e2e078977","Type":"ContainerDied","Data":"384d267a4ffc7043539cbe62adf85065eab6bdf2cc90b9895fbb7f032a7a7b8d"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.299248 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerStarted","Data":"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.299306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerStarted","Data":"b077ebe7dd7dbe07faf1664aba02e1d3a7a351726db3bd80be3be9d78b4390a0"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.300709 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"]
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.318362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.321811 4720 generic.go:334] "Generic (PLEG): container finished" podID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerID="1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266" exitCode=0
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.321929 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerDied","Data":"1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.321961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerStarted","Data":"7ba76fbffc18a5d94d7b82840ba472303bdd60c718fdefa21eb9052c349f54b8"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.332602 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.332953 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.332983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.333075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxbn\" (UniqueName: \"kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.375292 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" event={"ID":"b5843997-ba37-4e10-ae40-67c335d91321","Type":"ContainerStarted","Data":"f7f1b3dea8e86ac0734f95b78fa42a368df038f07425256e2f3c9a56124da709"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.375342 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" event={"ID":"b5843997-ba37-4e10-ae40-67c335d91321","Type":"ContainerStarted","Data":"32101484653aa1279fcbdda77c2db1b4cff4f15c090ac07a548f8db5cc10ee7f"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.376454 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.378500 4720 generic.go:334] "Generic (PLEG): container finished" podID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerID="1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896" exitCode=0
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.378996 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerDied","Data":"1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896"}
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.435543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxbn\" (UniqueName: \"kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.435748 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.435768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.439917 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.440227 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.501187 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxbn\" (UniqueName: \"kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn\") pod \"redhat-operators-wlmbw\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.507506 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 08:58:38 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Feb 02 08:58:38 crc kubenswrapper[4720]: [+]process-running ok
Feb 02 08:58:38 crc kubenswrapper[4720]: healthz check failed
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.507574 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.523657 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" podStartSLOduration=12.523620682 podStartE2EDuration="12.523620682s" podCreationTimestamp="2026-02-02 08:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:38.507170091 +0000 UTC m=+152.362795667" watchObservedRunningTime="2026-02-02 08:58:38.523620682 +0000 UTC m=+152.379246238"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.694650 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmbw"
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.728993 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"]
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.795403 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69l4c"]
Feb 02 08:58:38 crc kubenswrapper[4720]: I0202 08:58:38.910796 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.011140 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9pczc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.011704 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9pczc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.032076 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.032145 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-b96zd" podUID="18774b0b-cedf-47b3-9113-5531e4c256f0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.032220 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-b96zd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.032403 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b96zd" podUID="18774b0b-cedf-47b3-9113-5531e4c256f0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.043153 4720 patch_prober.go:28] interesting pod/console-f9d7485db-9pczc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.043206 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9pczc" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.219400 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"]
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.329813 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.331293 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.333015 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.334394 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.348449 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.424088 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" event={"ID":"d4f92bb0-73fe-45d5-870b-a63931a4ef12","Type":"ContainerStarted","Data":"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.424146 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" event={"ID":"d4f92bb0-73fe-45d5-870b-a63931a4ef12","Type":"ContainerStarted","Data":"3493fdebb5dddae98bf1c3507acf3ce4458a75b62da549873301e3d3963e60b0"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.424438 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.428551 4720 generic.go:334] "Generic (PLEG): container finished" podID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerID="a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b" exitCode=0
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.428631 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerDied","Data":"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.451268 4720 generic.go:334] "Generic (PLEG): container finished" podID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerID="c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7" exitCode=0
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.451360 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerDied","Data":"c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.451393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerStarted","Data":"4e45f54b2c2e088f64e02cf6f33211a7f5e8b9a85d2ff7c98afa0881558ef6eb"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.456076 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerStarted","Data":"033f701f01e138712116af7d7614eaebee811398e13d273eb2a37b089f6a5a2e"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.459234 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.459298 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.470443 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" podStartSLOduration=132.470425713 podStartE2EDuration="2m12.470425713s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:39.46324076 +0000 UTC m=+153.318866306" watchObservedRunningTime="2026-02-02 08:58:39.470425713 +0000 UTC m=+153.326051269"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.483217 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-65c4r" event={"ID":"b5843997-ba37-4e10-ae40-67c335d91321","Type":"ContainerStarted","Data":"6c1cd2b48cf1a5ed358dd5c3504bb6c2e3386ffa26e095b568d4f9ca720e100e"}
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.510908 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kbf24"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.518867 4720 patch_prober.go:28] interesting pod/router-default-5444994796-kbf24 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 08:58:39 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Feb 02 08:58:39 crc kubenswrapper[4720]: [+]process-running ok
Feb 02 08:58:39 crc kubenswrapper[4720]: healthz check failed
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.518947 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kbf24" podUID="c1b41d56-2810-41ac-b2d6-b81b14389e44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.560347 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.560434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.563824 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.597735 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.657203 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.906167 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.966622 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume\") pod \"37615bea-3d49-45d6-b190-450e2e078977\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") "
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.966726 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume\") pod \"37615bea-3d49-45d6-b190-450e2e078977\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") "
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.966796 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qz5\" (UniqueName: \"kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5\") pod \"37615bea-3d49-45d6-b190-450e2e078977\" (UID: \"37615bea-3d49-45d6-b190-450e2e078977\") "
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.971756 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume" (OuterVolumeSpecName: "config-volume") pod "37615bea-3d49-45d6-b190-450e2e078977" (UID: "37615bea-3d49-45d6-b190-450e2e078977"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.986257 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37615bea-3d49-45d6-b190-450e2e078977" (UID: "37615bea-3d49-45d6-b190-450e2e078977"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 08:58:39 crc kubenswrapper[4720]: I0202 08:58:39.999554 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5" (OuterVolumeSpecName: "kube-api-access-64qz5") pod "37615bea-3d49-45d6-b190-450e2e078977" (UID: "37615bea-3d49-45d6-b190-450e2e078977"). InnerVolumeSpecName "kube-api-access-64qz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.068831 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37615bea-3d49-45d6-b190-450e2e078977-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.068894 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37615bea-3d49-45d6-b190-450e2e078977-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.068916 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qz5\" (UniqueName: \"kubernetes.io/projected/37615bea-3d49-45d6-b190-450e2e078977-kube-api-access-64qz5\") on node \"crc\" DevicePath \"\""
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.105232 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.513862 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kbf24"
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.515717 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5fa859d6-01f4-438c-897d-81f6b78a022e","Type":"ContainerStarted","Data":"fd0cba655c4da3826c604682e4ee084b6dac9975bae3b62ff72fc9b0be8b1fa7"}
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.521594 4720 generic.go:334] "Generic (PLEG): container finished" podID="5b80e748-f510-4af7-ad42-85aa4763150d" containerID="9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9" exitCode=0
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.521694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerDied","Data":"9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9"}
Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.574102 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.578620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp" event={"ID":"37615bea-3d49-45d6-b190-450e2e078977","Type":"ContainerDied","Data":"26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2"} Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.578666 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26c0295f25d532cd803612227cb5751c52495a86f4cda77ff6c2742311d2bbd2" Feb 02 08:58:40 crc kubenswrapper[4720]: I0202 08:58:40.585122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kbf24" Feb 02 08:58:41 crc kubenswrapper[4720]: I0202 08:58:41.587465 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5fa859d6-01f4-438c-897d-81f6b78a022e","Type":"ContainerStarted","Data":"5c48d1a10a9b077b3cb6b71ebff69ede4ef9a0d156f8309697a80f16687090ec"} Feb 02 08:58:41 crc kubenswrapper[4720]: I0202 08:58:41.641051 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.6410288619999998 podStartE2EDuration="2.641028862s" podCreationTimestamp="2026-02-02 08:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:58:41.636938182 +0000 UTC m=+155.492563748" watchObservedRunningTime="2026-02-02 08:58:41.641028862 +0000 UTC m=+155.496654418" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.557037 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.648803 4720 generic.go:334] "Generic (PLEG): container finished" podID="5fa859d6-01f4-438c-897d-81f6b78a022e" containerID="5c48d1a10a9b077b3cb6b71ebff69ede4ef9a0d156f8309697a80f16687090ec" exitCode=0 Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.648855 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5fa859d6-01f4-438c-897d-81f6b78a022e","Type":"ContainerDied","Data":"5c48d1a10a9b077b3cb6b71ebff69ede4ef9a0d156f8309697a80f16687090ec"} Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.723367 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 08:58:42 crc kubenswrapper[4720]: E0202 08:58:42.724102 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37615bea-3d49-45d6-b190-450e2e078977" containerName="collect-profiles" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.724119 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="37615bea-3d49-45d6-b190-450e2e078977" containerName="collect-profiles" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.724234 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="37615bea-3d49-45d6-b190-450e2e078977" containerName="collect-profiles" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.724707 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.726584 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.727486 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.793732 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.826342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.826405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.927607 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.927682 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.928294 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:42 crc kubenswrapper[4720]: I0202 08:58:42.956573 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:43 crc kubenswrapper[4720]: I0202 08:58:43.050446 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:43 crc kubenswrapper[4720]: I0202 08:58:43.713934 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.158145 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.256570 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access\") pod \"5fa859d6-01f4-438c-897d-81f6b78a022e\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.256678 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir\") pod \"5fa859d6-01f4-438c-897d-81f6b78a022e\" (UID: \"5fa859d6-01f4-438c-897d-81f6b78a022e\") " Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.256908 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5fa859d6-01f4-438c-897d-81f6b78a022e" (UID: "5fa859d6-01f4-438c-897d-81f6b78a022e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.257082 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fa859d6-01f4-438c-897d-81f6b78a022e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.269235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5fa859d6-01f4-438c-897d-81f6b78a022e" (UID: "5fa859d6-01f4-438c-897d-81f6b78a022e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.358130 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5fa859d6-01f4-438c-897d-81f6b78a022e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.646532 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qrxgd" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.745903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5fa859d6-01f4-438c-897d-81f6b78a022e","Type":"ContainerDied","Data":"fd0cba655c4da3826c604682e4ee084b6dac9975bae3b62ff72fc9b0be8b1fa7"} Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.746224 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0cba655c4da3826c604682e4ee084b6dac9975bae3b62ff72fc9b0be8b1fa7" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.746289 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 08:58:44 crc kubenswrapper[4720]: I0202 08:58:44.794198 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7042a9d7-56be-4607-869a-e68bb13741d9","Type":"ContainerStarted","Data":"5bdc38286fab9b93da4696c5051765e0a5bf7845ee154879c3c66e5ce43a6204"} Feb 02 08:58:45 crc kubenswrapper[4720]: I0202 08:58:45.826275 4720 generic.go:334] "Generic (PLEG): container finished" podID="7042a9d7-56be-4607-869a-e68bb13741d9" containerID="5008046b52e0aa63d9ca0ea47d7180b657b01edbb5a5395585677ff2dc105a86" exitCode=0 Feb 02 08:58:45 crc kubenswrapper[4720]: I0202 08:58:45.826407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7042a9d7-56be-4607-869a-e68bb13741d9","Type":"ContainerDied","Data":"5008046b52e0aa63d9ca0ea47d7180b657b01edbb5a5395585677ff2dc105a86"} Feb 02 08:58:47 crc kubenswrapper[4720]: I0202 08:58:47.902601 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:58:47 crc kubenswrapper[4720]: I0202 08:58:47.903006 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:58:49 crc kubenswrapper[4720]: I0202 08:58:49.035566 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:49 crc kubenswrapper[4720]: I0202 08:58:49.040678 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 08:58:49 crc kubenswrapper[4720]: I0202 08:58:49.049844 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-b96zd" Feb 02 08:58:50 crc kubenswrapper[4720]: I0202 08:58:50.385849 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:50 crc kubenswrapper[4720]: I0202 08:58:50.392837 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37eb17d6-3474-4c16-aa20-cc508c7992fc-metrics-certs\") pod \"network-metrics-daemon-9qlsb\" (UID: \"37eb17d6-3474-4c16-aa20-cc508c7992fc\") " pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:50 crc kubenswrapper[4720]: I0202 08:58:50.542126 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9qlsb" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.804318 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.922932 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7042a9d7-56be-4607-869a-e68bb13741d9","Type":"ContainerDied","Data":"5bdc38286fab9b93da4696c5051765e0a5bf7845ee154879c3c66e5ce43a6204"} Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.922985 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bdc38286fab9b93da4696c5051765e0a5bf7845ee154879c3c66e5ce43a6204" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.922993 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.963636 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir\") pod \"7042a9d7-56be-4607-869a-e68bb13741d9\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.963771 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7042a9d7-56be-4607-869a-e68bb13741d9" (UID: "7042a9d7-56be-4607-869a-e68bb13741d9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.963850 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access\") pod \"7042a9d7-56be-4607-869a-e68bb13741d9\" (UID: \"7042a9d7-56be-4607-869a-e68bb13741d9\") " Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.964116 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7042a9d7-56be-4607-869a-e68bb13741d9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 08:58:54 crc kubenswrapper[4720]: I0202 08:58:54.970204 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7042a9d7-56be-4607-869a-e68bb13741d9" (UID: "7042a9d7-56be-4607-869a-e68bb13741d9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:58:55 crc kubenswrapper[4720]: I0202 08:58:55.065824 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7042a9d7-56be-4607-869a-e68bb13741d9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:58:58 crc kubenswrapper[4720]: I0202 08:58:58.326282 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.313929 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.314920 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8776z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lrvwk_openshift-marketplace(d422076d-6f6a-42ea-a820-4aa8399e4a8c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.316133 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lrvwk" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.334070 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.334343 4720 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqfbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pkzv8_openshift-marketplace(9d19aa30-4d50-415b-9d62-b913ad57185e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.335624 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pkzv8" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.359935 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.360054 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dxbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wlmbw_openshift-marketplace(5b80e748-f510-4af7-ad42-85aa4763150d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.361231 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.361279 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wlmbw" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.361351 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhpvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7rqfx_openshift-marketplace(d24cad55-5a84-4608-80ad-c6242f4650c7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 08:59:08 crc kubenswrapper[4720]: E0202 08:59:08.362743 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7rqfx" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" Feb 02 08:59:08 crc kubenswrapper[4720]: I0202 08:59:08.671797 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9qlsb"] Feb 02 08:59:08 crc kubenswrapper[4720]: W0202 08:59:08.842548 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37eb17d6_3474_4c16_aa20_cc508c7992fc.slice/crio-ead89bd846bd567d243d9757bc75613f9d8afb84457c3b4b80dbc62a45eb89ed WatchSource:0}: Error finding container ead89bd846bd567d243d9757bc75613f9d8afb84457c3b4b80dbc62a45eb89ed: Status 404 returned error can't find the container with id ead89bd846bd567d243d9757bc75613f9d8afb84457c3b4b80dbc62a45eb89ed Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.011612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerStarted","Data":"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc"} Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.013348 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" event={"ID":"37eb17d6-3474-4c16-aa20-cc508c7992fc","Type":"ContainerStarted","Data":"ead89bd846bd567d243d9757bc75613f9d8afb84457c3b4b80dbc62a45eb89ed"} Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.016255 4720 generic.go:334] "Generic (PLEG): container finished" podID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" 
containerID="a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b" exitCode=0 Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.016372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerDied","Data":"a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b"} Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.021353 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerID="38be16d5a823ff9ef38e4ed342a904626168081554f67cb74305150e6d6751cf" exitCode=0 Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.021406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerDied","Data":"38be16d5a823ff9ef38e4ed342a904626168081554f67cb74305150e6d6751cf"} Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.024711 4720 generic.go:334] "Generic (PLEG): container finished" podID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerID="2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15" exitCode=0 Feb 02 08:59:09 crc kubenswrapper[4720]: I0202 08:59:09.025590 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerDied","Data":"2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15"} Feb 02 08:59:09 crc kubenswrapper[4720]: E0202 08:59:09.026767 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7rqfx" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" Feb 02 08:59:09 crc kubenswrapper[4720]: E0202 08:59:09.026816 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wlmbw" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" Feb 02 08:59:09 crc kubenswrapper[4720]: E0202 08:59:09.027101 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pkzv8" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" Feb 02 08:59:09 crc kubenswrapper[4720]: E0202 08:59:09.027334 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lrvwk" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" Feb 02 08:59:10 crc kubenswrapper[4720]: I0202 08:59:10.033602 4720 generic.go:334] "Generic (PLEG): container finished" podID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerID="4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc" exitCode=0 Feb 02 08:59:10 crc kubenswrapper[4720]: I0202 08:59:10.033692 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" 
event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerDied","Data":"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc"} Feb 02 08:59:10 crc kubenswrapper[4720]: I0202 08:59:10.035131 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" event={"ID":"37eb17d6-3474-4c16-aa20-cc508c7992fc","Type":"ContainerStarted","Data":"187667a7e0c0cf570d3fe8c72629391363fc7d696c52954502b376fcd5c4e4ce"} Feb 02 08:59:10 crc kubenswrapper[4720]: I0202 08:59:10.088297 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lhrrv" Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.043341 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerStarted","Data":"4a2becc98b6ff2c0bb15b83e57f246070b32a331b9c9541f3844ef167a200711"} Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.048064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerStarted","Data":"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4"} Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.050666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerStarted","Data":"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676"} Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.052859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9qlsb" event={"ID":"37eb17d6-3474-4c16-aa20-cc508c7992fc","Type":"ContainerStarted","Data":"7cea34cbb47a87097ad04b9e8c9080b2513c608848407789f983b87519b77914"} Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.055052 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerStarted","Data":"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d"} Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.064198 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6hhb" podStartSLOduration=4.015554473 podStartE2EDuration="37.064177754s" podCreationTimestamp="2026-02-02 08:58:34 +0000 UTC" firstStartedPulling="2026-02-02 08:58:37.13920562 +0000 UTC m=+150.994831176" lastFinishedPulling="2026-02-02 08:59:10.187828901 +0000 UTC m=+184.043454457" observedRunningTime="2026-02-02 08:59:11.062977482 +0000 UTC m=+184.918603038" watchObservedRunningTime="2026-02-02 08:59:11.064177754 +0000 UTC m=+184.919803310" Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.089094 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jwrgw" podStartSLOduration=3.304717083 podStartE2EDuration="35.08907131s" podCreationTimestamp="2026-02-02 08:58:36 +0000 UTC" firstStartedPulling="2026-02-02 08:58:38.323741128 +0000 UTC m=+152.179366684" lastFinishedPulling="2026-02-02 08:59:10.108095355 +0000 UTC m=+183.963720911" observedRunningTime="2026-02-02 08:59:11.087746105 +0000 UTC m=+184.943371661" watchObservedRunningTime="2026-02-02 08:59:11.08907131 +0000 UTC 
m=+184.944696856" Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.107693 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-69l4c" podStartSLOduration=3.116506808 podStartE2EDuration="34.107666179s" podCreationTimestamp="2026-02-02 08:58:37 +0000 UTC" firstStartedPulling="2026-02-02 08:58:39.454852005 +0000 UTC m=+153.310477561" lastFinishedPulling="2026-02-02 08:59:10.446011376 +0000 UTC m=+184.301636932" observedRunningTime="2026-02-02 08:59:11.107226366 +0000 UTC m=+184.962851922" watchObservedRunningTime="2026-02-02 08:59:11.107666179 +0000 UTC m=+184.963291735" Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.136123 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4tb2" podStartSLOduration=4.209390577 podStartE2EDuration="35.13609396s" podCreationTimestamp="2026-02-02 08:58:36 +0000 UTC" firstStartedPulling="2026-02-02 08:58:39.441087307 +0000 UTC m=+153.296712863" lastFinishedPulling="2026-02-02 08:59:10.36779069 +0000 UTC m=+184.223416246" observedRunningTime="2026-02-02 08:59:11.131337663 +0000 UTC m=+184.986963219" watchObservedRunningTime="2026-02-02 08:59:11.13609396 +0000 UTC m=+184.991719516" Feb 02 08:59:11 crc kubenswrapper[4720]: I0202 08:59:11.151595 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9qlsb" podStartSLOduration=164.151570264 podStartE2EDuration="2m44.151570264s" podCreationTimestamp="2026-02-02 08:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:59:11.150325151 +0000 UTC m=+185.005950707" watchObservedRunningTime="2026-02-02 08:59:11.151570264 +0000 UTC m=+185.007195820" Feb 02 08:59:15 crc kubenswrapper[4720]: I0202 08:59:15.236503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 08:59:15 crc kubenswrapper[4720]: I0202 08:59:15.285145 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:15 crc kubenswrapper[4720]: I0202 08:59:15.285185 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:15 crc kubenswrapper[4720]: I0202 08:59:15.642256 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:16 crc kubenswrapper[4720]: I0202 08:59:16.134411 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:16 crc kubenswrapper[4720]: I0202 08:59:16.847714 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:59:16 crc kubenswrapper[4720]: I0202 08:59:16.847809 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:59:16 crc kubenswrapper[4720]: I0202 08:59:16.924559 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.137015 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.286506 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.286947 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.342202 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.863519 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6hhb"] Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.905007 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.905143 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:59:17 crc kubenswrapper[4720]: I0202 08:59:17.959784 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:59:18 crc kubenswrapper[4720]: I0202 08:59:18.095714 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6hhb" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="registry-server" containerID="cri-o://4a2becc98b6ff2c0bb15b83e57f246070b32a331b9c9541f3844ef167a200711" gracePeriod=2 Feb 02 08:59:18 crc kubenswrapper[4720]: I0202 08:59:18.137294 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:18 crc kubenswrapper[4720]: I0202 08:59:18.232847 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 08:59:18 crc kubenswrapper[4720]: I0202 08:59:18.233034 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 08:59:18 crc kubenswrapper[4720]: I0202 08:59:18.278956 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 08:59:19 crc kubenswrapper[4720]: I0202 08:59:19.107785 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerID="4a2becc98b6ff2c0bb15b83e57f246070b32a331b9c9541f3844ef167a200711" exitCode=0 Feb 02 08:59:19 crc kubenswrapper[4720]: I0202 08:59:19.107897 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerDied","Data":"4a2becc98b6ff2c0bb15b83e57f246070b32a331b9c9541f3844ef167a200711"} Feb 02 08:59:19 crc kubenswrapper[4720]: I0202 08:59:19.150721 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 08:59:19 crc kubenswrapper[4720]: I0202 08:59:19.259854 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"] Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.021108 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.119521 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6hhb" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.120564 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6hhb" event={"ID":"ca3b9b1b-3887-4562-a43f-7adf77aa3a43","Type":"ContainerDied","Data":"12876dd53053f6d7b39e7da0ff6480df973d6d20c2a5f319d53dd2593b72e365"} Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.120671 4720 scope.go:117] "RemoveContainer" containerID="4a2becc98b6ff2c0bb15b83e57f246070b32a331b9c9541f3844ef167a200711" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.141814 4720 scope.go:117] "RemoveContainer" containerID="38be16d5a823ff9ef38e4ed342a904626168081554f67cb74305150e6d6751cf" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.138874 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54hl9\" (UniqueName: \"kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9\") pod \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.143084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities\") pod \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.144631 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities" (OuterVolumeSpecName: "utilities") pod "ca3b9b1b-3887-4562-a43f-7adf77aa3a43" (UID: "ca3b9b1b-3887-4562-a43f-7adf77aa3a43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.144557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content\") pod \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\" (UID: \"ca3b9b1b-3887-4562-a43f-7adf77aa3a43\") " Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.148367 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.150304 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9" (OuterVolumeSpecName: "kube-api-access-54hl9") pod "ca3b9b1b-3887-4562-a43f-7adf77aa3a43" (UID: "ca3b9b1b-3887-4562-a43f-7adf77aa3a43"). InnerVolumeSpecName "kube-api-access-54hl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.194759 4720 scope.go:117] "RemoveContainer" containerID="48487db390721d736c87a880f3efb2a91d2ac59505d62b65146db19a46d9c6ba" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.206188 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca3b9b1b-3887-4562-a43f-7adf77aa3a43" (UID: "ca3b9b1b-3887-4562-a43f-7adf77aa3a43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.250014 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.250055 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54hl9\" (UniqueName: \"kubernetes.io/projected/ca3b9b1b-3887-4562-a43f-7adf77aa3a43-kube-api-access-54hl9\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.452120 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6hhb"] Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.459254 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6hhb"] Feb 02 08:59:20 crc kubenswrapper[4720]: I0202 08:59:20.896382 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" path="/var/lib/kubelet/pods/ca3b9b1b-3887-4562-a43f-7adf77aa3a43/volumes" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.128923 4720 generic.go:334] "Generic (PLEG): container finished" podID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerID="ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828" exitCode=0 Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.128979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerDied","Data":"ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828"} Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.140650 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerStarted","Data":"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15"} Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.140793 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4tb2" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="registry-server" containerID="cri-o://57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4" gracePeriod=2 Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.520697 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 08:59:21 crc kubenswrapper[4720]: E0202 08:59:21.521799 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa859d6-01f4-438c-897d-81f6b78a022e" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.521906 4720 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5fa859d6-01f4-438c-897d-81f6b78a022e" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: E0202 08:59:21.521933 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7042a9d7-56be-4607-869a-e68bb13741d9" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.521947 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7042a9d7-56be-4607-869a-e68bb13741d9" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: E0202 08:59:21.521971 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="extract-utilities" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.521978 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="extract-utilities" Feb 02 08:59:21 crc kubenswrapper[4720]: E0202 08:59:21.521992 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="registry-server" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.522004 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="registry-server" Feb 02 08:59:21 crc kubenswrapper[4720]: E0202 08:59:21.522019 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="extract-content" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.522025 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="extract-content" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.522258 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa859d6-01f4-438c-897d-81f6b78a022e" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.522294 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7042a9d7-56be-4607-869a-e68bb13741d9" containerName="pruner" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.522319 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3b9b1b-3887-4562-a43f-7adf77aa3a43" containerName="registry-server" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.523358 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.526847 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.527375 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.529903 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.556653 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.671174 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content\") pod \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.671312 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vph77\" (UniqueName: \"kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77\") pod \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.671383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities\") pod \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\" (UID: \"f9ec0be7-92c5-42d0-9537-555b855f0bc8\") " Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.671669 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.671705 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.672317 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities" (OuterVolumeSpecName: "utilities") pod "f9ec0be7-92c5-42d0-9537-555b855f0bc8" (UID: "f9ec0be7-92c5-42d0-9537-555b855f0bc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.679911 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77" (OuterVolumeSpecName: "kube-api-access-vph77") pod "f9ec0be7-92c5-42d0-9537-555b855f0bc8" (UID: "f9ec0be7-92c5-42d0-9537-555b855f0bc8"). InnerVolumeSpecName "kube-api-access-vph77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.698402 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9ec0be7-92c5-42d0-9537-555b855f0bc8" (UID: "f9ec0be7-92c5-42d0-9537-555b855f0bc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773029 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773138 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vph77\" (UniqueName: \"kubernetes.io/projected/f9ec0be7-92c5-42d0-9537-555b855f0bc8-kube-api-access-vph77\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773154 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.773170 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ec0be7-92c5-42d0-9537-555b855f0bc8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.790091 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:21 crc kubenswrapper[4720]: I0202 08:59:21.873255 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.155541 4720 generic.go:334] "Generic (PLEG): container finished" podID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerID="d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15" exitCode=0 Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.155772 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerDied","Data":"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15"} Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.156274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerStarted","Data":"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4"} Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.167402 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerStarted","Data":"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830"} Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.174060 4720 generic.go:334] "Generic (PLEG): container finished" podID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerID="57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4" exitCode=0 Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.174085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerDied","Data":"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4"} Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.174103 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4tb2" event={"ID":"f9ec0be7-92c5-42d0-9537-555b855f0bc8","Type":"ContainerDied","Data":"b077ebe7dd7dbe07faf1664aba02e1d3a7a351726db3bd80be3be9d78b4390a0"} Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.174134 4720 scope.go:117] "RemoveContainer" containerID="57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.174251 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4tb2" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.185495 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrvwk" podStartSLOduration=3.790559837 podStartE2EDuration="48.185469529s" podCreationTimestamp="2026-02-02 08:58:34 +0000 UTC" firstStartedPulling="2026-02-02 08:58:37.276737124 +0000 UTC m=+151.132362670" lastFinishedPulling="2026-02-02 08:59:21.671646806 +0000 UTC m=+195.527272362" observedRunningTime="2026-02-02 08:59:22.179726835 +0000 UTC m=+196.035352391" watchObservedRunningTime="2026-02-02 08:59:22.185469529 +0000 UTC m=+196.041095095" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.205796 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7rqfx" podStartSLOduration=4.0055262 podStartE2EDuration="47.205778722s" podCreationTimestamp="2026-02-02 08:58:35 +0000 UTC" firstStartedPulling="2026-02-02 08:58:38.393121827 +0000 UTC m=+152.248747383" lastFinishedPulling="2026-02-02 08:59:21.593374349 +0000 UTC m=+195.448999905" observedRunningTime="2026-02-02 08:59:22.204523469 +0000 UTC m=+196.060149055" watchObservedRunningTime="2026-02-02 08:59:22.205778722 +0000 UTC m=+196.061404268" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.231571 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"] Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.238729 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4tb2"] Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.241395 4720 scope.go:117] "RemoveContainer" containerID="2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.253958 4720 scope.go:117] "RemoveContainer" containerID="a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.271347 4720 scope.go:117] "RemoveContainer" containerID="57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4" Feb 02 08:59:22 crc kubenswrapper[4720]: E0202 08:59:22.274962 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4\": container with ID starting with 57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4 not found: ID does not exist" containerID="57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.275012 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4"} err="failed to get container status \"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4\": rpc error: code = NotFound desc = could not find container \"57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4\": container with ID starting with 57f362d9f6706f3f13525a5c22231b960682e311725388f80c53e554c2f04ea4 not found: ID does not exist" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.275068 4720 scope.go:117] "RemoveContainer" containerID="2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15" Feb 02 08:59:22 crc kubenswrapper[4720]: E0202 08:59:22.275409 4720 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15\": container with ID starting with 2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15 not found: ID does not exist" containerID="2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.275433 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15"} err="failed to get container status \"2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15\": rpc error: code = NotFound desc = could not find container \"2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15\": container with ID starting with 2769a360b302b10d8596144b177a1348e1cb400d65c04407459062dca504ec15 not found: ID does not exist" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.275449 4720 scope.go:117] "RemoveContainer" containerID="a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b" Feb 02 08:59:22 crc kubenswrapper[4720]: E0202 08:59:22.278977 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b\": container with ID starting with a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b not found: ID does not exist" containerID="a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.279009 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b"} err="failed to get container status \"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b\": rpc error: code = NotFound desc = could not find container \"a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b\": container with ID starting with a14eaede9f741c218c4f130d9c2e3a9f748140e55129f8292bc65911f29e677b not found: ID does not exist" Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.307591 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 08:59:22 crc kubenswrapper[4720]: W0202 08:59:22.329282 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd6be2615_59a2_4d0c_ae15_ce615d2e596b.slice/crio-1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d WatchSource:0}: Error finding container 1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d: Status 404 returned error can't find the container with id 1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d Feb 02 08:59:22 crc kubenswrapper[4720]: I0202 08:59:22.898746 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" path="/var/lib/kubelet/pods/f9ec0be7-92c5-42d0-9537-555b855f0bc8/volumes" Feb 02 08:59:23 crc kubenswrapper[4720]: I0202 08:59:23.184042 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6be2615-59a2-4d0c-ae15-ce615d2e596b","Type":"ContainerStarted","Data":"513ea83e091c73dfb203e2f00a5c607fd4d4d8aae580684f451c45ed0878a108"} Feb 02 08:59:23 crc kubenswrapper[4720]: I0202 08:59:23.184130 4720 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6be2615-59a2-4d0c-ae15-ce615d2e596b","Type":"ContainerStarted","Data":"1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d"} Feb 02 08:59:23 crc kubenswrapper[4720]: I0202 08:59:23.186458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerStarted","Data":"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421"} Feb 02 08:59:23 crc kubenswrapper[4720]: I0202 08:59:23.207178 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.207156818 podStartE2EDuration="2.207156818s" podCreationTimestamp="2026-02-02 08:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:59:23.205032376 +0000 UTC m=+197.060657942" watchObservedRunningTime="2026-02-02 08:59:23.207156818 +0000 UTC m=+197.062782374" Feb 02 08:59:24 crc kubenswrapper[4720]: I0202 08:59:24.196945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerStarted","Data":"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156"} Feb 02 08:59:24 crc kubenswrapper[4720]: I0202 08:59:24.198666 4720 generic.go:334] "Generic (PLEG): container finished" podID="d6be2615-59a2-4d0c-ae15-ce615d2e596b" containerID="513ea83e091c73dfb203e2f00a5c607fd4d4d8aae580684f451c45ed0878a108" exitCode=0 Feb 02 08:59:24 crc kubenswrapper[4720]: I0202 08:59:24.198739 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6be2615-59a2-4d0c-ae15-ce615d2e596b","Type":"ContainerDied","Data":"513ea83e091c73dfb203e2f00a5c607fd4d4d8aae580684f451c45ed0878a108"} Feb 02 08:59:24 crc kubenswrapper[4720]: I0202 08:59:24.201660 4720 generic.go:334] "Generic (PLEG): container finished" podID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerID="9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421" exitCode=0 Feb 02 08:59:24 crc kubenswrapper[4720]: I0202 08:59:24.201728 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerDied","Data":"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421"} Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.050174 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.050701 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.122005 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.210745 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerStarted","Data":"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380"} Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.214235 
4720 generic.go:334] "Generic (PLEG): container finished" podID="5b80e748-f510-4af7-ad42-85aa4763150d" containerID="1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156" exitCode=0 Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.214316 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerDied","Data":"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156"} Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.245449 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pkzv8" podStartSLOduration=3.745482144 podStartE2EDuration="51.245412312s" podCreationTimestamp="2026-02-02 08:58:34 +0000 UTC" firstStartedPulling="2026-02-02 08:58:37.166046499 +0000 UTC m=+151.021672055" lastFinishedPulling="2026-02-02 08:59:24.665976657 +0000 UTC m=+198.521602223" observedRunningTime="2026-02-02 08:59:25.240475797 +0000 UTC m=+199.096101363" watchObservedRunningTime="2026-02-02 08:59:25.245412312 +0000 UTC m=+199.101037908" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.488739 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.529055 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.529121 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.585772 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.639141 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir\") pod \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.639548 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6be2615-59a2-4d0c-ae15-ce615d2e596b" (UID: "d6be2615-59a2-4d0c-ae15-ce615d2e596b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.639752 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access\") pod \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\" (UID: \"d6be2615-59a2-4d0c-ae15-ce615d2e596b\") " Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.640068 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.656423 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6be2615-59a2-4d0c-ae15-ce615d2e596b" (UID: "d6be2615-59a2-4d0c-ae15-ce615d2e596b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:25 crc kubenswrapper[4720]: I0202 08:59:25.741021 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6be2615-59a2-4d0c-ae15-ce615d2e596b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.224315 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerStarted","Data":"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb"} Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.226591 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.227257 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6be2615-59a2-4d0c-ae15-ce615d2e596b","Type":"ContainerDied","Data":"1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d"} Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.227317 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1954bbfdc9053e393d9fd43a486f847e6b615c2d530d13eea597f4049b88596d" Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.272419 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wlmbw" podStartSLOduration=3.182079914 podStartE2EDuration="48.27237927s" podCreationTimestamp="2026-02-02 08:58:38 +0000 UTC" firstStartedPulling="2026-02-02 08:58:40.542433167 +0000 UTC m=+154.398058723" lastFinishedPulling="2026-02-02 08:59:25.632732513 +0000 UTC m=+199.488358079" observedRunningTime="2026-02-02 08:59:26.266092815 +0000 UTC m=+200.121718401" watchObservedRunningTime="2026-02-02 08:59:26.27237927 +0000 UTC m=+200.128004846" Feb 02 08:59:26 crc kubenswrapper[4720]: I0202 08:59:26.290668 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:27 crc kubenswrapper[4720]: I0202 08:59:27.660116 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"] Feb 02 08:59:28 crc kubenswrapper[4720]: I0202 08:59:28.238770 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7rqfx" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="registry-server" containerID="cri-o://06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830" gracePeriod=2 Feb 02 08:59:28 crc kubenswrapper[4720]: I0202 08:59:28.695843 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:28 crc kubenswrapper[4720]: I0202 08:59:28.695991 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.247056 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.247676 4720 generic.go:334] "Generic (PLEG): container finished" podID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerID="06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830" exitCode=0 Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.248742 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerDied","Data":"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830"} Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.248786 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rqfx" event={"ID":"d24cad55-5a84-4608-80ad-c6242f4650c7","Type":"ContainerDied","Data":"8ca7366458b1933beb30b94cbf3c3be4064b67cde8c1c1027216419d8fc800e9"} Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.248809 4720 scope.go:117] "RemoveContainer" containerID="06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.275172 4720 scope.go:117] "RemoveContainer" containerID="ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.291787 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpvv\" (UniqueName: \"kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv\") pod \"d24cad55-5a84-4608-80ad-c6242f4650c7\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.291916 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities\") pod \"d24cad55-5a84-4608-80ad-c6242f4650c7\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.292022 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content\") pod \"d24cad55-5a84-4608-80ad-c6242f4650c7\" (UID: \"d24cad55-5a84-4608-80ad-c6242f4650c7\") " Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.293483 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities" (OuterVolumeSpecName: "utilities") pod "d24cad55-5a84-4608-80ad-c6242f4650c7" (UID: "d24cad55-5a84-4608-80ad-c6242f4650c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.301648 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv" (OuterVolumeSpecName: "kube-api-access-nhpvv") pod "d24cad55-5a84-4608-80ad-c6242f4650c7" (UID: "d24cad55-5a84-4608-80ad-c6242f4650c7"). InnerVolumeSpecName "kube-api-access-nhpvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.307287 4720 scope.go:117] "RemoveContainer" containerID="1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.314755 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321294 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="extract-utilities" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321343 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="extract-utilities" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321367 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="extract-content" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321382 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="extract-content" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321407 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6be2615-59a2-4d0c-ae15-ce615d2e596b" containerName="pruner" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321419 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6be2615-59a2-4d0c-ae15-ce615d2e596b" containerName="pruner" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321442 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="extract-utilities" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321455 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="extract-utilities" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321468 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321479 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321497 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="extract-content" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321508 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="extract-content" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.321534 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321544 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321750 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ec0be7-92c5-42d0-9537-555b855f0bc8" containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321774 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" 
containerName="registry-server" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.321791 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6be2615-59a2-4d0c-ae15-ce615d2e596b" containerName="pruner" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.325876 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.326137 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.331053 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.331425 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.368505 4720 scope.go:117] "RemoveContainer" containerID="06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.369020 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830\": container with ID starting with 06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830 not found: ID does not exist" containerID="06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.369064 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830"} err="failed to get container status \"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830\": rpc error: code = NotFound desc = could not find container \"06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830\": container with ID starting with 06455c6040e70fa0f3b157343abb870676b0ba440cc31c45fcad720b0b060830 not found: ID does not exist" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.369094 4720 scope.go:117] "RemoveContainer" containerID="ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.369574 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828\": container with ID starting with ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828 not found: ID does not exist" containerID="ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.369647 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828"} err="failed to get container status \"ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828\": rpc error: code = NotFound desc = could not find container \"ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828\": container with ID starting with ca364bfa38974d82522848a2a8d3d0ad08a8d0a393d539c5354a21bfe0480828 not found: ID does not exist" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.369680 4720 scope.go:117] "RemoveContainer" 
containerID="1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896" Feb 02 08:59:29 crc kubenswrapper[4720]: E0202 08:59:29.373670 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896\": container with ID starting with 1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896 not found: ID does not exist" containerID="1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.373707 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896"} err="failed to get container status \"1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896\": rpc error: code = NotFound desc = could not find container \"1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896\": container with ID starting with 1a268f7898eb2bb1cd9e2a8ca84ba6682f0ffc70158462ed3ac040441787c896 not found: ID does not exist" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.379674 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d24cad55-5a84-4608-80ad-c6242f4650c7" (UID: "d24cad55-5a84-4608-80ad-c6242f4650c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.394362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.394683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.394805 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.395013 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.395159 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhpvv\" (UniqueName: \"kubernetes.io/projected/d24cad55-5a84-4608-80ad-c6242f4650c7-kube-api-access-nhpvv\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.395256 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24cad55-5a84-4608-80ad-c6242f4650c7-utilities\") on node \"crc\" 
DevicePath \"\"" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.496015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.496078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.496108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.496186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.496238 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.515347 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access\") pod \"installer-9-crc\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.702047 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 08:59:29 crc kubenswrapper[4720]: I0202 08:59:29.762214 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wlmbw" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="registry-server" probeResult="failure" output=< Feb 02 08:59:29 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 08:59:29 crc kubenswrapper[4720]: > Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.201914 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 08:59:30 crc kubenswrapper[4720]: W0202 08:59:30.216918 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcc946dfe_0e74_411f_afd9_fd2ee0e79c58.slice/crio-b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2 WatchSource:0}: Error finding container b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2: Status 404 returned error can't find the container with id b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2 Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.258740 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rqfx" Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.260457 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cc946dfe-0e74-411f-afd9-fd2ee0e79c58","Type":"ContainerStarted","Data":"b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2"} Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.313139 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"] Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.323796 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7rqfx"] Feb 02 08:59:30 crc kubenswrapper[4720]: I0202 08:59:30.895179 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24cad55-5a84-4608-80ad-c6242f4650c7" path="/var/lib/kubelet/pods/d24cad55-5a84-4608-80ad-c6242f4650c7/volumes" Feb 02 08:59:31 crc kubenswrapper[4720]: I0202 08:59:31.268809 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cc946dfe-0e74-411f-afd9-fd2ee0e79c58","Type":"ContainerStarted","Data":"b3d4e7035646b494c12142ceb1ed5a5dea5ebfe72cefcc935e41ec0366644df2"} Feb 02 08:59:31 crc kubenswrapper[4720]: I0202 08:59:31.299489 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.299464487 podStartE2EDuration="2.299464487s" podCreationTimestamp="2026-02-02 08:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:59:31.29687418 +0000 UTC m=+205.152499756" watchObservedRunningTime="2026-02-02 08:59:31.299464487 +0000 UTC m=+205.155090043" Feb 02 08:59:34 crc kubenswrapper[4720]: I0202 08:59:34.898181 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:59:34 crc kubenswrapper[4720]: I0202 08:59:34.900975 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:59:34 crc 
kubenswrapper[4720]: I0202 08:59:34.980200 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:59:35 crc kubenswrapper[4720]: I0202 08:59:35.105006 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 08:59:35 crc kubenswrapper[4720]: I0202 08:59:35.362625 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 08:59:38 crc kubenswrapper[4720]: I0202 08:59:38.772167 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:38 crc kubenswrapper[4720]: I0202 08:59:38.845799 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.265629 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"] Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.267601 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wlmbw" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="registry-server" containerID="cri-o://2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb" gracePeriod=2 Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.707450 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.777986 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities\") pod \"5b80e748-f510-4af7-ad42-85aa4763150d\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.778051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxbn\" (UniqueName: \"kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn\") pod \"5b80e748-f510-4af7-ad42-85aa4763150d\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.778127 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content\") pod \"5b80e748-f510-4af7-ad42-85aa4763150d\" (UID: \"5b80e748-f510-4af7-ad42-85aa4763150d\") " Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.779150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities" (OuterVolumeSpecName: "utilities") pod "5b80e748-f510-4af7-ad42-85aa4763150d" (UID: "5b80e748-f510-4af7-ad42-85aa4763150d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.789317 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn" (OuterVolumeSpecName: "kube-api-access-7dxbn") pod "5b80e748-f510-4af7-ad42-85aa4763150d" (UID: "5b80e748-f510-4af7-ad42-85aa4763150d"). 
InnerVolumeSpecName "kube-api-access-7dxbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.879861 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.879932 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxbn\" (UniqueName: \"kubernetes.io/projected/5b80e748-f510-4af7-ad42-85aa4763150d-kube-api-access-7dxbn\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.893800 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b80e748-f510-4af7-ad42-85aa4763150d" (UID: "5b80e748-f510-4af7-ad42-85aa4763150d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:59:41 crc kubenswrapper[4720]: I0202 08:59:41.982097 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b80e748-f510-4af7-ad42-85aa4763150d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.353320 4720 generic.go:334] "Generic (PLEG): container finished" podID="5b80e748-f510-4af7-ad42-85aa4763150d" containerID="2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb" exitCode=0 Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.353386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerDied","Data":"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb"} Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.353398 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wlmbw" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.353428 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wlmbw" event={"ID":"5b80e748-f510-4af7-ad42-85aa4763150d","Type":"ContainerDied","Data":"033f701f01e138712116af7d7614eaebee811398e13d273eb2a37b089f6a5a2e"} Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.353458 4720 scope.go:117] "RemoveContainer" containerID="2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.391236 4720 scope.go:117] "RemoveContainer" containerID="1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.407939 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"] Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.414081 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wlmbw"] Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.438338 4720 scope.go:117] "RemoveContainer" containerID="9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.463524 4720 scope.go:117] "RemoveContainer" containerID="2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb" Feb 02 08:59:42 crc kubenswrapper[4720]: E0202 08:59:42.464930 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb\": container with ID starting with 2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb not found: ID does not exist" containerID="2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.465253 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb"} err="failed to get container status \"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb\": rpc error: code = NotFound desc = could not find container \"2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb\": container with ID starting with 2c4c45cb3e14075930ccd5f984d3e4a6f86bac400629677f0e5433fc3495eebb not found: ID does not exist" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.465427 4720 scope.go:117] "RemoveContainer" containerID="1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156" Feb 02 08:59:42 crc kubenswrapper[4720]: E0202 08:59:42.466099 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156\": container with ID starting with 1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156 not found: ID does not exist" containerID="1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.466162 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156"} err="failed to get container status \"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156\": rpc error: code = NotFound desc = could not find container 
\"1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156\": container with ID starting with 1e3ce9dea7debcc25ccf8721c9b78fd62465aa36ff5ab9d8d011227c36464156 not found: ID does not exist" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.466201 4720 scope.go:117] "RemoveContainer" containerID="9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9" Feb 02 08:59:42 crc kubenswrapper[4720]: E0202 08:59:42.466857 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9\": container with ID starting with 9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9 not found: ID does not exist" containerID="9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.466934 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9"} err="failed to get container status \"9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9\": rpc error: code = NotFound desc = could not find container \"9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9\": container with ID starting with 9a32ebcab28c7d47fbf1bdbcf0f96f56c3cddeee3566c492baeb96e5876f41e9 not found: ID does not exist" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.894123 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" path="/var/lib/kubelet/pods/5b80e748-f510-4af7-ad42-85aa4763150d/volumes" Feb 02 08:59:42 crc kubenswrapper[4720]: I0202 08:59:42.988671 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" podUID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" containerName="oauth-openshift" containerID="cri-o://7f0507baefdd68255014f2c93f95df985ebf86be92cc10249bbfd63cb0410fe8" gracePeriod=15 Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.361863 4720 generic.go:334] "Generic (PLEG): container finished" podID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" containerID="7f0507baefdd68255014f2c93f95df985ebf86be92cc10249bbfd63cb0410fe8" exitCode=0 Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.361923 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" event={"ID":"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb","Type":"ContainerDied","Data":"7f0507baefdd68255014f2c93f95df985ebf86be92cc10249bbfd63cb0410fe8"} Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.361949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" event={"ID":"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb","Type":"ContainerDied","Data":"8c9a4bd1734624696b86b69718f7806740b730e4328c8ddec5777a49f0ba9f79"} Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.361962 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9a4bd1734624696b86b69718f7806740b730e4328c8ddec5777a49f0ba9f79" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.364862 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.397838 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.397901 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.397941 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4v9z\" (UniqueName: \"kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.397973 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.397996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398026 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398061 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398129 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398166 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " 
Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398246 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398299 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398327 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies\") pod \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\" (UID: \"21f7a8b5-2926-41b2-89fe-6f0b4171f7bb\") " Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398309 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.398532 4720 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.399151 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.399141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.399195 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.399233 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.404106 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.405061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.405334 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z" (OuterVolumeSpecName: "kube-api-access-s4v9z") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "kube-api-access-s4v9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.405602 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.407017 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.407259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.407641 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.416240 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.420548 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" (UID: "21f7a8b5-2926-41b2-89fe-6f0b4171f7bb"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.499981 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500518 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500535 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500550 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500567 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500578 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500593 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500605 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500618 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500632 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4v9z\" (UniqueName: \"kubernetes.io/projected/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-kube-api-access-s4v9z\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500645 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500656 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:43 crc kubenswrapper[4720]: I0202 08:59:43.500667 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.152820 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-d8dcd"] Feb 02 08:59:44 crc kubenswrapper[4720]: E0202 08:59:44.153460 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" containerName="oauth-openshift" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153481 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" containerName="oauth-openshift" Feb 02 08:59:44 crc kubenswrapper[4720]: E0202 08:59:44.153503 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="extract-content" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153518 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="extract-content" Feb 02 08:59:44 crc kubenswrapper[4720]: E0202 08:59:44.153533 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="extract-utilities" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153544 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="extract-utilities" Feb 02 08:59:44 crc kubenswrapper[4720]: E0202 08:59:44.153560 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="registry-server" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153570 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="registry-server" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153710 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" containerName="oauth-openshift" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.153730 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b80e748-f510-4af7-ad42-85aa4763150d" containerName="registry-server" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.154343 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.163260 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-d8dcd"] Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.208685 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-dir\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209030 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-policies\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209647 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209766 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.209874 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.210011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.210125 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.210248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.210446 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.210580 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9nk\" (UniqueName: \"kubernetes.io/projected/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-kube-api-access-vh9nk\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9nk\" (UniqueName: \"kubernetes.io/projected/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-kube-api-access-vh9nk\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " 
pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312544 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-dir\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312611 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312656 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312717 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-policies\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312829 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312865 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " 
pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.312971 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.313015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.313055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.313172 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.313246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.314276 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-dir\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.315704 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.316304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-audit-policies\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 
crc kubenswrapper[4720]: I0202 08:59:44.318807 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.318857 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.320133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.320293 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.322595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.323509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.323575 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.324009 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.340558 
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.346100 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9nk\" (UniqueName: \"kubernetes.io/projected/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-kube-api-access-vh9nk\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.362749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-d8dcd\" (UID: \"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.368634 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lcvpd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.403217 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.408137 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lcvpd"] Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.470789 4720 util.go:30] "No sandbox for pod can be found. 
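Every volume above logs a "MountVolume started" line and a "MountVolume.SetUp succeeded" line keyed by the same UniqueName, so per-volume mount latency can be recovered from the log itself. The stdlib-only sketch below reads a dump like this one on stdin; the regexes are fitted to these exact lines (journald-escaped quotes, klog timestamps without a year) and are assumptions, not a stable kubelet log format.

```go
// mountlat.go: pair "MountVolume started" with "MountVolume.SetUp succeeded"
// per UniqueName and print the delta. Fitted to the lines above only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	tsRe   = regexp.MustCompile(`[IWE](\d{4} \d{2}:\d{2}:\d{2}\.\d{6})`) // klog stamp, no year
	uniqRe = regexp.MustCompile(`UniqueName: \\"([^"\\]+)\\"`)           // journald-escaped quotes
)

func main() {
	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // entries here run to several KB
	for sc.Scan() {
		line := sc.Text()
		ts, uq := tsRe.FindStringSubmatch(line), uniqRe.FindStringSubmatch(line)
		if ts == nil || uq == nil {
			continue
		}
		// klog omits the year; 2026 is taken from the journald timestamps above.
		t, err := time.Parse("0102 15:04:05.000000 2006", ts[1]+" 2026")
		if err != nil {
			continue
		}
		switch {
		case strings.Contains(line, "MountVolume started"):
			started[uq[1]] = t
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			if s, ok := started[uq[1]]; ok {
				fmt.Printf("%s %v\n", uq[1], t.Sub(s))
			}
		}
	}
}
```

Run against this artifact with something like `zcat kubelet.log.gz | go run mountlat.go`; for the oauth-openshift pod above, most volumes complete within roughly 100ms of the reconciler starting them.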
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.895350 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f7a8b5-2926-41b2-89fe-6f0b4171f7bb" path="/var/lib/kubelet/pods/21f7a8b5-2926-41b2-89fe-6f0b4171f7bb/volumes" Feb 02 08:59:44 crc kubenswrapper[4720]: I0202 08:59:44.918049 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-d8dcd"] Feb 02 08:59:44 crc kubenswrapper[4720]: W0202 08:59:44.928636 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b6d0d9_e0a5_42ec_bc6a_ed9d416d4cfa.slice/crio-e55c710bf056714bed87d84cde82354f32a26f4794ec74c352d9b60c1fc5a946 WatchSource:0}: Error finding container e55c710bf056714bed87d84cde82354f32a26f4794ec74c352d9b60c1fc5a946: Status 404 returned error can't find the container with id e55c710bf056714bed87d84cde82354f32a26f4794ec74c352d9b60c1fc5a946 Feb 02 08:59:45 crc kubenswrapper[4720]: I0202 08:59:45.376620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" event={"ID":"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa","Type":"ContainerStarted","Data":"8725cafe08633418f0bf5189951b1dd1487ed7e5f18e6d0516eb32acd6dbd67b"} Feb 02 08:59:45 crc kubenswrapper[4720]: I0202 08:59:45.376935 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:45 crc kubenswrapper[4720]: I0202 08:59:45.376956 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" event={"ID":"f3b6d0d9-e0a5-42ec-bc6a-ed9d416d4cfa","Type":"ContainerStarted","Data":"e55c710bf056714bed87d84cde82354f32a26f4794ec74c352d9b60c1fc5a946"} Feb 02 08:59:45 crc kubenswrapper[4720]: I0202 08:59:45.703491 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" Feb 02 08:59:45 crc kubenswrapper[4720]: I0202 08:59:45.732180 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b554fccb6-d8dcd" podStartSLOduration=28.732162316 podStartE2EDuration="28.732162316s" podCreationTimestamp="2026-02-02 08:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 08:59:45.414654191 +0000 UTC m=+219.270279787" watchObservedRunningTime="2026-02-02 08:59:45.732162316 +0000 UTC m=+219.587787862" Feb 02 08:59:47 crc kubenswrapper[4720]: I0202 08:59:47.902096 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:59:47 crc kubenswrapper[4720]: I0202 08:59:47.902552 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:59:47 crc kubenswrapper[4720]: I0202 08:59:47.902610 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 08:59:47 crc kubenswrapper[4720]: I0202 08:59:47.903472 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:59:47 crc kubenswrapper[4720]: I0202 08:59:47.903543 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7" gracePeriod=600 Feb 02 08:59:48 crc kubenswrapper[4720]: I0202 08:59:48.406273 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7" exitCode=0 Feb 02 08:59:48 crc kubenswrapper[4720]: I0202 08:59:48.406356 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7"} Feb 02 08:59:48 crc kubenswrapper[4720]: I0202 08:59:48.406717 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc"} Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.138850 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4"] Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.140387 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.145126 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.145750 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.150426 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4"] Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.259828 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.259951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5b4\" (UniqueName: \"kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.260224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.362131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.362226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5b4\" (UniqueName: \"kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.362315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.363762 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume\") pod 
\"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.372585 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.399509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5b4\" (UniqueName: \"kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4\") pod \"collect-profiles-29500380-klbx4\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.503671 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:00 crc kubenswrapper[4720]: I0202 09:00:00.980675 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4"] Feb 02 09:00:00 crc kubenswrapper[4720]: W0202 09:00:00.987397 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51e240ae_2392_450b_8913_72694775a55d.slice/crio-8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86 WatchSource:0}: Error finding container 8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86: Status 404 returned error can't find the container with id 8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86 Feb 02 09:00:01 crc kubenswrapper[4720]: I0202 09:00:01.499114 4720 generic.go:334] "Generic (PLEG): container finished" podID="51e240ae-2392-450b-8913-72694775a55d" containerID="8cdc5d779f3fc2cabf95ace8867810c435921136415983dd989075473a6a8b39" exitCode=0 Feb 02 09:00:01 crc kubenswrapper[4720]: I0202 09:00:01.499151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" event={"ID":"51e240ae-2392-450b-8913-72694775a55d","Type":"ContainerDied","Data":"8cdc5d779f3fc2cabf95ace8867810c435921136415983dd989075473a6a8b39"} Feb 02 09:00:01 crc kubenswrapper[4720]: I0202 09:00:01.499174 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" event={"ID":"51e240ae-2392-450b-8913-72694775a55d","Type":"ContainerStarted","Data":"8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86"} Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.766342 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.897286 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume\") pod \"51e240ae-2392-450b-8913-72694775a55d\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.897400 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc5b4\" (UniqueName: \"kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4\") pod \"51e240ae-2392-450b-8913-72694775a55d\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.897454 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume\") pod \"51e240ae-2392-450b-8913-72694775a55d\" (UID: \"51e240ae-2392-450b-8913-72694775a55d\") " Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.898367 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume" (OuterVolumeSpecName: "config-volume") pod "51e240ae-2392-450b-8913-72694775a55d" (UID: "51e240ae-2392-450b-8913-72694775a55d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.904017 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4" (OuterVolumeSpecName: "kube-api-access-lc5b4") pod "51e240ae-2392-450b-8913-72694775a55d" (UID: "51e240ae-2392-450b-8913-72694775a55d"). InnerVolumeSpecName "kube-api-access-lc5b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.904542 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51e240ae-2392-450b-8913-72694775a55d" (UID: "51e240ae-2392-450b-8913-72694775a55d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.999146 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e240ae-2392-450b-8913-72694775a55d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.999206 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc5b4\" (UniqueName: \"kubernetes.io/projected/51e240ae-2392-450b-8913-72694775a55d-kube-api-access-lc5b4\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:02 crc kubenswrapper[4720]: I0202 09:00:02.999228 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e240ae-2392-450b-8913-72694775a55d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:03 crc kubenswrapper[4720]: I0202 09:00:03.513808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" event={"ID":"51e240ae-2392-450b-8913-72694775a55d","Type":"ContainerDied","Data":"8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86"} Feb 02 09:00:03 crc kubenswrapper[4720]: I0202 09:00:03.513852 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c02463b7a23bce1d53c93ed595ddebd6298417c9d947361990d0bfad5babf86" Feb 02 09:00:03 crc kubenswrapper[4720]: I0202 09:00:03.513978 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.367015 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.368095 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e240ae-2392-450b-8913-72694775a55d" containerName="collect-profiles" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.368110 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e240ae-2392-450b-8913-72694775a55d" containerName="collect-profiles" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.368252 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e240ae-2392-450b-8913-72694775a55d" containerName="collect-profiles" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.368755 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370001 4720 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370776 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664" gracePeriod=15 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370840 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4" gracePeriod=15 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370876 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4" gracePeriod=15 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370912 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631" gracePeriod=15 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.370869 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc" gracePeriod=15 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372216 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372443 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372461 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372475 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372481 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372490 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372496 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 09:00:08 crc 
kubenswrapper[4720]: E0202 09:00:08.372505 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372512 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372531 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372538 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372547 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372553 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.372563 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372572 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372666 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372677 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372685 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372692 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372698 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.372708 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.396093 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.396187 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.396230 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.396386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.396549 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498072 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498170 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498204 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498261 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498288 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498288 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498323 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498341 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498425 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498522 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.498619 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.544044 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.545395 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 
09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.546281 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc" exitCode=0 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.546383 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4" exitCode=0 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.546458 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631" exitCode=0 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.546564 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4" exitCode=2 Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.546356 4720 scope.go:117] "RemoveContainer" containerID="bb061902399a196ae5fc880bf16398cd47eb69f3959cede67d0ff548a3344688" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600210 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600309 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600350 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600378 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.600437 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.689688 4720 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.690314 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.690843 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.691165 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.691464 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:08 crc kubenswrapper[4720]: I0202 09:00:08.691519 4720 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.691812 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Feb 02 09:00:08 crc kubenswrapper[4720]: E0202 09:00:08.892516 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Feb 02 09:00:09 crc kubenswrapper[4720]: E0202 09:00:09.293339 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Feb 02 09:00:09 crc kubenswrapper[4720]: I0202 09:00:09.556048 4720 generic.go:334] "Generic (PLEG): container finished" podID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" containerID="b3d4e7035646b494c12142ceb1ed5a5dea5ebfe72cefcc935e41ec0366644df2" exitCode=0 Feb 02 09:00:09 crc kubenswrapper[4720]: I0202 09:00:09.556152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cc946dfe-0e74-411f-afd9-fd2ee0e79c58","Type":"ContainerDied","Data":"b3d4e7035646b494c12142ceb1ed5a5dea5ebfe72cefcc935e41ec0366644df2"} Feb 02 09:00:09 crc kubenswrapper[4720]: I0202 09:00:09.557105 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:09 crc kubenswrapper[4720]: I0202 
09:00:09.559703 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 09:00:10 crc kubenswrapper[4720]: E0202 09:00:10.094646 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.753030 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.753992 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.754498 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.754662 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.755193 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.755439 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.932816 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.932928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock\") pod \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.932975 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933004 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") 
pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933063 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir\") pod \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock" (OuterVolumeSpecName: "var-lock") pod "cc946dfe-0e74-411f-afd9-fd2ee0e79c58" (UID: "cc946dfe-0e74-411f-afd9-fd2ee0e79c58"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933122 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access\") pod \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\" (UID: \"cc946dfe-0e74-411f-afd9-fd2ee0e79c58\") " Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933136 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933156 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933173 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933190 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cc946dfe-0e74-411f-afd9-fd2ee0e79c58" (UID: "cc946dfe-0e74-411f-afd9-fd2ee0e79c58"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933461 4720 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933486 4720 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933505 4720 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933524 4720 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.933542 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:10 crc kubenswrapper[4720]: I0202 09:00:10.942133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc946dfe-0e74-411f-afd9-fd2ee0e79c58" (UID: "cc946dfe-0e74-411f-afd9-fd2ee0e79c58"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.034818 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc946dfe-0e74-411f-afd9-fd2ee0e79c58-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.574664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cc946dfe-0e74-411f-afd9-fd2ee0e79c58","Type":"ContainerDied","Data":"b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2"} Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.575145 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6db390294b6a17a0ca691e84aaabb20c530173529f303fd5b376f89496f02e2" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.575974 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.579083 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.580181 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664" exitCode=0 Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.580249 4720 scope.go:117] "RemoveContainer" containerID="b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.580434 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.581161 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.581558 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.597471 4720 scope.go:117] "RemoveContainer" containerID="4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.607011 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.607807 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.612485 4720 scope.go:117] "RemoveContainer" containerID="0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.618101 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.618790 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.177:6443: connect: connection refused" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.628950 4720 scope.go:117] "RemoveContainer" containerID="82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.652452 4720 scope.go:117] "RemoveContainer" containerID="e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.678138 4720 scope.go:117] "RemoveContainer" containerID="24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.695767 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.701972 4720 scope.go:117] "RemoveContainer" containerID="b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.702501 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\": container with ID starting with b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc not found: ID does not exist" containerID="b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.702551 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc"} err="failed to get container status \"b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\": rpc error: code = NotFound desc = could not find container \"b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc\": container with ID starting with b3a332d19840c5e3c6016749982e70e9fae49596a8c491715967535a0c8434cc not found: ID does not exist" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.702590 4720 scope.go:117] "RemoveContainer" containerID="4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.703164 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\": container with ID starting with 4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4 not found: ID does not exist" containerID="4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.703200 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4"} err="failed to get container status \"4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\": rpc error: code = NotFound desc = could not find container \"4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4\": container with ID starting with 4bb27b090511ca616a99f04d6f74958c013e4d3f22effab268647e3ca5b606c4 not found: ID does not exist" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.703227 4720 scope.go:117] "RemoveContainer" 
containerID="0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.703736 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\": container with ID starting with 0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631 not found: ID does not exist" containerID="0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.703763 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631"} err="failed to get container status \"0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\": rpc error: code = NotFound desc = could not find container \"0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631\": container with ID starting with 0e57be26a4922400f76e4f36e0ddb8e2e4a5f73e7bed2874feb766e75873b631 not found: ID does not exist" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.703779 4720 scope.go:117] "RemoveContainer" containerID="82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.704200 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\": container with ID starting with 82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4 not found: ID does not exist" containerID="82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.704226 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4"} err="failed to get container status \"82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\": rpc error: code = NotFound desc = could not find container \"82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4\": container with ID starting with 82b62c115ada92293a16c6dae9dff99e5f0e682815ad3b1718472513a2e5b8a4 not found: ID does not exist" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.704245 4720 scope.go:117] "RemoveContainer" containerID="e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.704596 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\": container with ID starting with e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664 not found: ID does not exist" containerID="e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.704616 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664"} err="failed to get container status \"e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\": rpc error: code = NotFound desc = could not find container \"e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664\": container with ID starting with 
e2eb24dc9154a4ed04159278c7ed7c9fde3bba86e8ebdd9a9986a729cf881664 not found: ID does not exist" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.704632 4720 scope.go:117] "RemoveContainer" containerID="24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7" Feb 02 09:00:11 crc kubenswrapper[4720]: E0202 09:00:11.704906 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\": container with ID starting with 24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7 not found: ID does not exist" containerID="24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7" Feb 02 09:00:11 crc kubenswrapper[4720]: I0202 09:00:11.704927 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7"} err="failed to get container status \"24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\": rpc error: code = NotFound desc = could not find container \"24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7\": container with ID starting with 24a602fb77d5db99fe0085b3aca6823e127e88a255cfc7cd165e008e1a4fbbc7 not found: ID does not exist" Feb 02 09:00:12 crc kubenswrapper[4720]: I0202 09:00:12.896472 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 09:00:13 crc kubenswrapper[4720]: E0202 09:00:13.409865 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:13 crc kubenswrapper[4720]: I0202 09:00:13.410375 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:13 crc kubenswrapper[4720]: W0202 09:00:13.434545 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-feb2693ac34a63600ab42f807808c49160ecdf2e8aebf6b228108fc8a0be37c4 WatchSource:0}: Error finding container feb2693ac34a63600ab42f807808c49160ecdf2e8aebf6b228108fc8a0be37c4: Status 404 returned error can't find the container with id feb2693ac34a63600ab42f807808c49160ecdf2e8aebf6b228108fc8a0be37c4 Feb 02 09:00:13 crc kubenswrapper[4720]: E0202 09:00:13.436844 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890625eef388323 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 09:00:13.436511011 +0000 UTC m=+247.292136567,LastTimestamp:2026-02-02 09:00:13.436511011 +0000 UTC m=+247.292136567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 09:00:13 crc kubenswrapper[4720]: I0202 09:00:13.597051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"feb2693ac34a63600ab42f807808c49160ecdf2e8aebf6b228108fc8a0be37c4"} Feb 02 09:00:14 crc kubenswrapper[4720]: I0202 09:00:14.605794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29"} Feb 02 09:00:14 crc kubenswrapper[4720]: E0202 09:00:14.606459 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:14 crc kubenswrapper[4720]: I0202 09:00:14.606548 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:14 crc kubenswrapper[4720]: E0202 09:00:14.896494 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" 
Feb 02 09:00:15 crc kubenswrapper[4720]: E0202 09:00:15.612868 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:16 crc kubenswrapper[4720]: I0202 09:00:16.892269 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:18 crc kubenswrapper[4720]: E0202 09:00:18.043044 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890625eef388323 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 09:00:13.436511011 +0000 UTC m=+247.292136567,LastTimestamp:2026-02-02 09:00:13.436511011 +0000 UTC m=+247.292136567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 09:00:21 crc kubenswrapper[4720]: E0202 09:00:21.298816 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="7s" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.657693 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.657788 4720 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34" exitCode=1 Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.657841 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34"} Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.658635 4720 scope.go:117] "RemoveContainer" containerID="1606da9b1e5ff7571a5d004e33b48580a5c15e19c46aeedf269c0aa2c127af34" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.659187 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.660093 4720 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.886945 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.888552 4720 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.889643 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.934315 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.934371 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:21 crc kubenswrapper[4720]: E0202 09:00:21.935112 4720 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:21 crc kubenswrapper[4720]: I0202 09:00:21.936122 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:21 crc kubenswrapper[4720]: W0202 09:00:21.973358 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-58c238d814cb73552b7b99b16d3c0c5c11c1dd8b7b6cb94f618896a46a429389 WatchSource:0}: Error finding container 58c238d814cb73552b7b99b16d3c0c5c11c1dd8b7b6cb94f618896a46a429389: Status 404 returned error can't find the container with id 58c238d814cb73552b7b99b16d3c0c5c11c1dd8b7b6cb94f618896a46a429389 Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.676063 4720 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f0b31be2bf8176aa327bcdb5e458e68dd14ddcf1cb8b18b0a725b9235a036adb" exitCode=0 Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.676233 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f0b31be2bf8176aa327bcdb5e458e68dd14ddcf1cb8b18b0a725b9235a036adb"} Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.676279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58c238d814cb73552b7b99b16d3c0c5c11c1dd8b7b6cb94f618896a46a429389"} Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.676706 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.676730 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:22 crc kubenswrapper[4720]: E0202 09:00:22.677268 4720 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.677530 4720 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.677838 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.683600 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.683680 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bdd8a094c4ac450e65f5d9f1267a5e202337424f242df2a98903169c18c4e1d0"} Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.684633 4720 status_manager.go:851] "Failed to get status for pod" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:22 crc kubenswrapper[4720]: I0202 09:00:22.685124 4720 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Feb 02 09:00:23 crc kubenswrapper[4720]: I0202 09:00:23.695578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26e1d29240e07c45cdcc98299f66c046c5b6698d1173c8a1d0c8ac7078422415"} Feb 02 09:00:23 crc kubenswrapper[4720]: I0202 09:00:23.695908 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc70ec70a3bbe9d6c1fc1a4669330cd93656f48b16adf32e7be42b6bfeb5d59c"} Feb 02 09:00:23 crc kubenswrapper[4720]: I0202 09:00:23.695921 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6b78c54a67065053cc7218a4d79cff9bc38b8a6b7d8800a3460cb1d0dbb59d93"} Feb 02 09:00:24 crc kubenswrapper[4720]: I0202 09:00:24.708634 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"30887c4ce3b8959fbb0ed7bc8c5db9e999c432f487907d63c6cce04b65811f3c"} Feb 02 09:00:24 crc kubenswrapper[4720]: I0202 09:00:24.708701 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cfc11d28b9fca6423bbcb0e0b3057de9f384a0d48162daa808b62648f0f03049"} Feb 02 09:00:24 crc kubenswrapper[4720]: I0202 09:00:24.708990 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:24 crc kubenswrapper[4720]: I0202 09:00:24.709222 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:24 crc kubenswrapper[4720]: I0202 09:00:24.709283 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:25 crc kubenswrapper[4720]: I0202 09:00:25.061904 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 09:00:25 crc kubenswrapper[4720]: I0202 09:00:25.068357 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 
09:00:25 crc kubenswrapper[4720]: I0202 09:00:25.716477 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 09:00:26 crc kubenswrapper[4720]: I0202 09:00:26.937268 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:26 crc kubenswrapper[4720]: I0202 09:00:26.937794 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:26 crc kubenswrapper[4720]: I0202 09:00:26.950435 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:29 crc kubenswrapper[4720]: I0202 09:00:29.752480 4720 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:29 crc kubenswrapper[4720]: I0202 09:00:29.865307 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e6ca3ce0-8760-4f9e-a00c-e38258407bf7" Feb 02 09:00:30 crc kubenswrapper[4720]: I0202 09:00:30.757282 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:30 crc kubenswrapper[4720]: I0202 09:00:30.757728 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:30 crc kubenswrapper[4720]: I0202 09:00:30.763179 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e6ca3ce0-8760-4f9e-a00c-e38258407bf7" Feb 02 09:00:30 crc kubenswrapper[4720]: I0202 09:00:30.764513 4720 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://6b78c54a67065053cc7218a4d79cff9bc38b8a6b7d8800a3460cb1d0dbb59d93" Feb 02 09:00:30 crc kubenswrapper[4720]: I0202 09:00:30.764715 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:31 crc kubenswrapper[4720]: I0202 09:00:31.766013 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:31 crc kubenswrapper[4720]: I0202 09:00:31.766066 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="99f2e153-a112-4dea-97b9-a401b1fed68d" Feb 02 09:00:31 crc kubenswrapper[4720]: I0202 09:00:31.771442 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e6ca3ce0-8760-4f9e-a00c-e38258407bf7" Feb 02 09:00:40 crc kubenswrapper[4720]: I0202 09:00:40.019138 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 09:00:40 crc kubenswrapper[4720]: I0202 09:00:40.090743 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 09:00:40 crc kubenswrapper[4720]: I0202 09:00:40.420939 4720 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.080403 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.428248 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.585442 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.613585 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.627753 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.635848 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 09:00:41 crc kubenswrapper[4720]: I0202 09:00:41.647327 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.042485 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.215476 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.303291 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.434370 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.536145 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.658574 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.721030 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 09:00:42 crc kubenswrapper[4720]: I0202 09:00:42.941650 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.019923 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.024503 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.161011 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.174865 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.310874 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.355970 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.379025 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.481516 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.503029 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.563645 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.589356 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.596570 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.746971 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.791483 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.849594 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.904126 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 09:00:43 crc kubenswrapper[4720]: I0202 09:00:43.969248 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.018247 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.163726 4720 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.166356 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.264319 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.273896 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.297822 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.335278 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.336309 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.359794 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.398729 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.420221 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.446845 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.501584 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.503103 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.544660 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.545079 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.583687 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.595076 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.624812 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.651102 4720 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.903106 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.920774 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.978306 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.988290 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 09:00:44 crc kubenswrapper[4720]: I0202 09:00:44.997758 4720 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.041872 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.070733 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.151602 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.175577 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.191125 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.246815 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.276942 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.305616 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.328230 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.339962 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.496345 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.581427 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.593920 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.598565 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.661564 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.664026 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.768943 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.786986 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 09:00:45 crc kubenswrapper[4720]: I0202 09:00:45.863091 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.031957 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.040042 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.071510 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.120408 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.180086 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.180249 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.312631 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.346827 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.353865 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.403924 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.417796 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.634797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.642640 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.731066 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.812541 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 09:00:46 crc kubenswrapper[4720]: I0202 09:00:46.846037 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.012942 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.028955 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.195302 4720 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.273550 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.332344 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.359009 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.359116 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.375251 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.399026 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.414932 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.422767 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.428508 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.429721 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.484579 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.497956 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.532193 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.549130 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.620842 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.692730 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.714929 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.717878 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 
09:00:47.760054 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.762664 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.788319 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.884718 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 09:00:47 crc kubenswrapper[4720]: I0202 09:00:47.900241 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.161656 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.206459 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.337253 4720 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.343209 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.343278 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/marketplace-operator-79b997595-wm6qb"] Feb 02 09:00:48 crc kubenswrapper[4720]: E0202 09:00:48.343742 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" containerName="installer" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.343775 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" containerName="installer" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.343915 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc946dfe-0e74-411f-afd9-fd2ee0e79c58" containerName="installer" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.344374 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkzv8","openshift-marketplace/certified-operators-lrvwk","openshift-marketplace/redhat-operators-69l4c","openshift-marketplace/redhat-marketplace-jwrgw","openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.344674 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerName="marketplace-operator" containerID="cri-o://a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842" gracePeriod=30 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.344857 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.345524 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jwrgw" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="registry-server" containerID="cri-o://38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d" gracePeriod=30 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.345624 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrvwk" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="registry-server" containerID="cri-o://7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4" gracePeriod=30 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.348245 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.364155 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.367205 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.392867 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.392766548 podStartE2EDuration="19.392766548s" podCreationTimestamp="2026-02-02 09:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:00:48.380833004 +0000 UTC m=+282.236458600" watchObservedRunningTime="2026-02-02 09:00:48.392766548 +0000 UTC m=+282.248392144" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.442035 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.486182 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.497865 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.498006 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hjs\" (UniqueName: \"kubernetes.io/projected/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-kube-api-access-66hjs\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.498062 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wm6qb\" 
(UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.538639 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.599786 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.600286 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.600341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66hjs\" (UniqueName: \"kubernetes.io/projected/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-kube-api-access-66hjs\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.602633 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.619986 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66hjs\" (UniqueName: \"kubernetes.io/projected/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-kube-api-access-66hjs\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.623851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bd2b2ce4-bf90-4cab-b03c-010e17f20ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wm6qb\" (UID: \"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.670286 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.690060 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.728022 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.748759 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.756686 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.766393 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.776003 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.784461 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.818655 4720 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.865449 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.898074 4720 generic.go:334] "Generic (PLEG): container finished" podID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerID="7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4" exitCode=0 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.898209 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrvwk" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.902091 4720 generic.go:334] "Generic (PLEG): container finished" podID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerID="a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842" exitCode=0 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.902218 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerDied","Data":"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905330 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrvwk" event={"ID":"d422076d-6f6a-42ea-a820-4aa8399e4a8c","Type":"ContainerDied","Data":"4e8bc3fb716ad73554defc724a87566e04a61271d6048165b8c97478dc3a1540"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905347 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" event={"ID":"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e","Type":"ContainerDied","Data":"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qfsxp" event={"ID":"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e","Type":"ContainerDied","Data":"6a3aeb84b87002771641c3b617a543cf03fc0e8276c56a2390aa51892c913b57"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905388 4720 scope.go:117] "RemoveContainer" containerID="7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905695 4720 generic.go:334] "Generic (PLEG): container finished" podID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerID="38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d" exitCode=0 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905754 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerDied","Data":"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905786 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwrgw" event={"ID":"f00de3c0-345f-4bba-a14e-7f7f351b2d23","Type":"ContainerDied","Data":"7ba76fbffc18a5d94d7b82840ba472303bdd60c718fdefa21eb9052c349f54b8"} Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.905918 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwrgw" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.906177 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pkzv8" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="registry-server" containerID="cri-o://c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380" gracePeriod=30 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.906265 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-69l4c" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="registry-server" containerID="cri-o://4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676" gracePeriod=30 Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922532 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities\") pod \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922595 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8776z\" (UniqueName: \"kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z\") pod \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922624 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities\") pod \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922676 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content\") pod \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922718 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca\") pod \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics\") pod \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922871 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4z68\" (UniqueName: \"kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68\") pod \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\" (UID: \"f00de3c0-345f-4bba-a14e-7f7f351b2d23\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content\") pod \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\" (UID: \"d422076d-6f6a-42ea-a820-4aa8399e4a8c\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.922992 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qdth\" (UniqueName: \"kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth\") pod \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\" (UID: \"9c2d7533-55a5-4fa2-8c6b-fd441d79a21e\") " Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.927252 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities" (OuterVolumeSpecName: "utilities") pod "d422076d-6f6a-42ea-a820-4aa8399e4a8c" (UID: "d422076d-6f6a-42ea-a820-4aa8399e4a8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.928845 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities" (OuterVolumeSpecName: "utilities") pod "f00de3c0-345f-4bba-a14e-7f7f351b2d23" (UID: "f00de3c0-345f-4bba-a14e-7f7f351b2d23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.928897 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" (UID: "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.934922 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68" (OuterVolumeSpecName: "kube-api-access-k4z68") pod "f00de3c0-345f-4bba-a14e-7f7f351b2d23" (UID: "f00de3c0-345f-4bba-a14e-7f7f351b2d23"). InnerVolumeSpecName "kube-api-access-k4z68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.937741 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth" (OuterVolumeSpecName: "kube-api-access-8qdth") pod "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" (UID: "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e"). InnerVolumeSpecName "kube-api-access-8qdth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.938380 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" (UID: "9c2d7533-55a5-4fa2-8c6b-fd441d79a21e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.940895 4720 scope.go:117] "RemoveContainer" containerID="d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.941150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z" (OuterVolumeSpecName: "kube-api-access-8776z") pod "d422076d-6f6a-42ea-a820-4aa8399e4a8c" (UID: "d422076d-6f6a-42ea-a820-4aa8399e4a8c"). InnerVolumeSpecName "kube-api-access-8776z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.959085 4720 scope.go:117] "RemoveContainer" containerID="014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.967055 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wm6qb"] Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.980004 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f00de3c0-345f-4bba-a14e-7f7f351b2d23" (UID: "f00de3c0-345f-4bba-a14e-7f7f351b2d23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.988838 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 09:00:48 crc kubenswrapper[4720]: I0202 09:00:48.990916 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.000319 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d422076d-6f6a-42ea-a820-4aa8399e4a8c" (UID: "d422076d-6f6a-42ea-a820-4aa8399e4a8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025006 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025037 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025068 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4z68\" (UniqueName: \"kubernetes.io/projected/f00de3c0-345f-4bba-a14e-7f7f351b2d23-kube-api-access-k4z68\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025080 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025090 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qdth\" (UniqueName: \"kubernetes.io/projected/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e-kube-api-access-8qdth\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025101 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d422076d-6f6a-42ea-a820-4aa8399e4a8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025147 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8776z\" (UniqueName: \"kubernetes.io/projected/d422076d-6f6a-42ea-a820-4aa8399e4a8c-kube-api-access-8776z\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025160 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.025169 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00de3c0-345f-4bba-a14e-7f7f351b2d23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.031460 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.060009 4720 scope.go:117] "RemoveContainer" containerID="7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.060811 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4\": container with ID starting with 7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4 not found: ID does not exist" containerID="7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.060867 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4"} err="failed to get container status \"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4\": rpc error: code = NotFound desc = could not find container \"7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4\": container with ID starting with 7a13c0ef8e8c390877cb3e735ead8b207f7804af29cc3edbecf9ee61afe4d1e4 not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.060930 4720 scope.go:117] "RemoveContainer" containerID="d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.061358 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15\": container with ID starting with d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15 not found: ID does not exist" containerID="d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.061402 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15"} err="failed to get container status \"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15\": rpc error: code = NotFound desc = could not find container \"d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15\": container with ID starting with d48535c6ca86b41138d04225c776d50d9ff664952ae21f59f5fbbb859740be15 not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.061421 4720 scope.go:117] "RemoveContainer" containerID="014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.062170 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c\": container with ID starting with 014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c not found: ID does not exist" containerID="014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.062205 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c"} err="failed to get container status \"014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c\": rpc error: code = NotFound desc = could not find container \"014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c\": container with ID starting with 014e3e6182ea895d03da60b915677b05d049369bf04e5520d9247e475756f11c not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.062226 4720 scope.go:117] "RemoveContainer" containerID="a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.080957 4720 scope.go:117] "RemoveContainer" containerID="a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.081507 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842\": 
container with ID starting with a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842 not found: ID does not exist" containerID="a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.081559 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842"} err="failed to get container status \"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842\": rpc error: code = NotFound desc = could not find container \"a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842\": container with ID starting with a6d167c0bf0ec65d17b47de983bb8db3aa65fe5d47b1748c17288f8ae4601842 not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.081598 4720 scope.go:117] "RemoveContainer" containerID="38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.104547 4720 scope.go:117] "RemoveContainer" containerID="a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.137366 4720 scope.go:117] "RemoveContainer" containerID="1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.160042 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.163026 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.173139 4720 scope.go:117] "RemoveContainer" containerID="38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.173686 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d\": container with ID starting with 38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d not found: ID does not exist" containerID="38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.173739 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d"} err="failed to get container status \"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d\": rpc error: code = NotFound desc = could not find container \"38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d\": container with ID starting with 38d1e179e36d0bc5caf3e623dfbb53ca71619c7a6f5aa46cf28b565e256e720d not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.173774 4720 scope.go:117] "RemoveContainer" containerID="a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.174470 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b\": container with ID starting with a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b not found: ID does not exist" 
containerID="a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.174502 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b"} err="failed to get container status \"a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b\": rpc error: code = NotFound desc = could not find container \"a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b\": container with ID starting with a1a9c268555bbe1347208457aaf97a18b7a2cb807f4338062a24a471fb09140b not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.174520 4720 scope.go:117] "RemoveContainer" containerID="1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266" Feb 02 09:00:49 crc kubenswrapper[4720]: E0202 09:00:49.174798 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266\": container with ID starting with 1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266 not found: ID does not exist" containerID="1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.174826 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266"} err="failed to get container status \"1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266\": rpc error: code = NotFound desc = could not find container \"1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266\": container with ID starting with 1910e74022ccf2c3b4fd50c533618514ebd00fbb7a09ef2c901d07a07c626266 not found: ID does not exist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.216528 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.242931 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwrgw"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.251697 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwrgw"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.278770 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrvwk"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.285237 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.285261 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.289520 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrvwk"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.296484 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.301121 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qfsxp"] Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.312200 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.327349 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.403809 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428498 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities\") pod \"9d19aa30-4d50-415b-9d62-b913ad57185e\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428582 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content\") pod \"9d19aa30-4d50-415b-9d62-b913ad57185e\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428622 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v26x9\" (UniqueName: \"kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9\") pod \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428680 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content\") pod \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428725 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqfbw\" (UniqueName: \"kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw\") pod \"9d19aa30-4d50-415b-9d62-b913ad57185e\" (UID: \"9d19aa30-4d50-415b-9d62-b913ad57185e\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.428758 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities\") pod \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\" (UID: \"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8\") " Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.429248 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities" (OuterVolumeSpecName: 
"utilities") pod "9d19aa30-4d50-415b-9d62-b913ad57185e" (UID: "9d19aa30-4d50-415b-9d62-b913ad57185e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.429495 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities" (OuterVolumeSpecName: "utilities") pod "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" (UID: "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.433405 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9" (OuterVolumeSpecName: "kube-api-access-v26x9") pod "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" (UID: "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8"). InnerVolumeSpecName "kube-api-access-v26x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.434974 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw" (OuterVolumeSpecName: "kube-api-access-lqfbw") pod "9d19aa30-4d50-415b-9d62-b913ad57185e" (UID: "9d19aa30-4d50-415b-9d62-b913ad57185e"). InnerVolumeSpecName "kube-api-access-lqfbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.491716 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d19aa30-4d50-415b-9d62-b913ad57185e" (UID: "9d19aa30-4d50-415b-9d62-b913ad57185e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.507197 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.526389 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.530152 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.530183 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d19aa30-4d50-415b-9d62-b913ad57185e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.530199 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v26x9\" (UniqueName: \"kubernetes.io/projected/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-kube-api-access-v26x9\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.530233 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqfbw\" (UniqueName: \"kubernetes.io/projected/9d19aa30-4d50-415b-9d62-b913ad57185e-kube-api-access-lqfbw\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.530242 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.536390 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.547245 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" (UID: "838cf91b-dbd2-4574-8769-4dd1b0dbc9a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.573830 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.581745 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.627968 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.631762 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.727551 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.755827 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 09:00:49 crc kubenswrapper[4720]: I0202 09:00:49.758803 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.036471 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.037205 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.037498 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.037831 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.046987 4720 generic.go:334] "Generic (PLEG): container finished" podID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerID="c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380" exitCode=0 Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.047197 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerDied","Data":"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.047269 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkzv8" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.047272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkzv8" event={"ID":"9d19aa30-4d50-415b-9d62-b913ad57185e","Type":"ContainerDied","Data":"3daf884fd9a3d7144a6aa3840707f8c3fed4901338b90ac18c7df06aca4bb864"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.047299 4720 scope.go:117] "RemoveContainer" containerID="c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.049745 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/0.log" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.049785 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" containerID="918690a5cdb6a07375724efe1106146d2c8d91deb6afa44f835cbd3a8a85731a" exitCode=1 Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.049838 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" event={"ID":"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5","Type":"ContainerDied","Data":"918690a5cdb6a07375724efe1106146d2c8d91deb6afa44f835cbd3a8a85731a"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.050017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" event={"ID":"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5","Type":"ContainerStarted","Data":"eee24528c91a034450ba684c84ea9962510c3beb1a4b69df9a9454f0868e1a0e"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.050758 4720 scope.go:117] "RemoveContainer" containerID="918690a5cdb6a07375724efe1106146d2c8d91deb6afa44f835cbd3a8a85731a" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.059634 4720 generic.go:334] "Generic (PLEG): container finished" podID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerID="4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676" exitCode=0 Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.060646 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-69l4c" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.066142 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerDied","Data":"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.066277 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69l4c" event={"ID":"838cf91b-dbd2-4574-8769-4dd1b0dbc9a8","Type":"ContainerDied","Data":"4e45f54b2c2e088f64e02cf6f33211a7f5e8b9a85d2ff7c98afa0881558ef6eb"} Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.082313 4720 scope.go:117] "RemoveContainer" containerID="9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.109556 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-69l4c"] Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.115822 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-69l4c"] Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.121531 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkzv8"] Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.126531 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pkzv8"] Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.131360 4720 scope.go:117] "RemoveContainer" containerID="b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.168062 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.173567 4720 scope.go:117] "RemoveContainer" containerID="c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.174288 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380\": container with ID starting with c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380 not found: ID does not exist" containerID="c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.174333 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380"} err="failed to get container status \"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380\": rpc error: code = NotFound desc = could not find container \"c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380\": container with ID starting with c20bb538e8a7a3cacc7c60355267a3aa9cb0ce708ebd4c29332c6c4b0bf17380 not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.174362 4720 scope.go:117] "RemoveContainer" containerID="9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.174670 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421\": container with ID starting with 9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421 not found: ID does not exist" containerID="9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.174704 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421"} err="failed to get container status \"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421\": rpc error: code = NotFound desc = could not find container \"9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421\": container with ID starting with 9a465f9dfec2deddde318a7d2044267c11cabebeec0edaf00d9429da8b728421 not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.174727 4720 scope.go:117] "RemoveContainer" containerID="b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.175087 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4\": container with ID starting with b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4 not found: ID does not exist" containerID="b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.175115 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4"} err="failed to get container status \"b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4\": rpc error: code = NotFound desc = could not find container \"b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4\": container with ID starting with b991ecace4c1429201909b860029b743b761bbe34611c64401a07c4856318df4 not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.175131 4720 scope.go:117] "RemoveContainer" containerID="4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.179230 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.199721 4720 scope.go:117] "RemoveContainer" containerID="4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.209261 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d19aa30_4d50_415b_9d62_b913ad57185e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod838cf91b_dbd2_4574_8769_4dd1b0dbc9a8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d19aa30_4d50_415b_9d62_b913ad57185e.slice/crio-3daf884fd9a3d7144a6aa3840707f8c3fed4901338b90ac18c7df06aca4bb864\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod838cf91b_dbd2_4574_8769_4dd1b0dbc9a8.slice/crio-4e45f54b2c2e088f64e02cf6f33211a7f5e8b9a85d2ff7c98afa0881558ef6eb\": RecentStats: unable to find data in memory cache]" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.217193 4720 scope.go:117] "RemoveContainer" containerID="c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.234558 4720 scope.go:117] "RemoveContainer" containerID="4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.235168 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676\": container with ID starting with 4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676 not found: ID does not exist" containerID="4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.235205 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676"} err="failed to get container status \"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676\": rpc error: code = NotFound desc = could not find container \"4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676\": container with ID starting with 4f4e06dc8485078cde68dbe18cd28847fe1ffae722f5cbdc55ad9bbf86dca676 not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.235227 4720 scope.go:117] "RemoveContainer" containerID="4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.235437 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc\": container with ID starting with 4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc not found: ID does not exist" containerID="4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.235454 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc"} err="failed to get container status \"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc\": rpc error: code = NotFound desc = could not find container \"4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc\": container with ID starting with 4efc1cdd02216fe479765af306d4ef58050d54d15fa09cac89ed7dfaab6cafbc not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.235467 4720 scope.go:117] "RemoveContainer" containerID="c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7" Feb 02 09:00:50 crc kubenswrapper[4720]: E0202 09:00:50.235673 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7\": container with ID starting with c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7 not found: ID does not exist" containerID="c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7" Feb 02 09:00:50 crc 
kubenswrapper[4720]: I0202 09:00:50.235692 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7"} err="failed to get container status \"c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7\": rpc error: code = NotFound desc = could not find container \"c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7\": container with ID starting with c28cf08b06397f344fdfc59f8c28c1debc5687a59bd9394e0253fe0d1c2e96e7 not found: ID does not exist" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.272659 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.287041 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.365171 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.374037 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.390907 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.447658 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.477307 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.521926 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.556705 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.605193 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.660095 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.687517 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.736122 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.757862 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.759994 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.907092 4720 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" path="/var/lib/kubelet/pods/838cf91b-dbd2-4574-8769-4dd1b0dbc9a8/volumes" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.908409 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" path="/var/lib/kubelet/pods/9c2d7533-55a5-4fa2-8c6b-fd441d79a21e/volumes" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.909340 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" path="/var/lib/kubelet/pods/9d19aa30-4d50-415b-9d62-b913ad57185e/volumes" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.911311 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" path="/var/lib/kubelet/pods/d422076d-6f6a-42ea-a820-4aa8399e4a8c/volumes" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.912456 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" path="/var/lib/kubelet/pods/f00de3c0-345f-4bba-a14e-7f7f351b2d23/volumes" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.950724 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 09:00:50 crc kubenswrapper[4720]: I0202 09:00:50.978761 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.014376 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.020958 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.044659 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.053303 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.082637 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/1.log" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.083711 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/0.log" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.083796 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" containerID="c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352" exitCode=1 Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.083920 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" event={"ID":"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5","Type":"ContainerDied","Data":"c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352"} Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.083977 4720 scope.go:117] "RemoveContainer" 
containerID="918690a5cdb6a07375724efe1106146d2c8d91deb6afa44f835cbd3a8a85731a" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.088328 4720 scope.go:117] "RemoveContainer" containerID="c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352" Feb 02 09:00:51 crc kubenswrapper[4720]: E0202 09:00:51.089246 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-wm6qb_openshift-marketplace(bd2b2ce4-bf90-4cab-b03c-010e17f20ff5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" podUID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.232870 4720 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.233287 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29" gracePeriod=5 Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.250524 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.299708 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.362037 4720 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.437482 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.444129 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.471707 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.543292 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.625319 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.726842 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.796999 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.831283 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.913536 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 09:00:51 crc kubenswrapper[4720]: I0202 09:00:51.944340 
4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.008336 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.010497 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.093677 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.097904 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/1.log" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.098532 4720 scope.go:117] "RemoveContainer" containerID="c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352" Feb 02 09:00:52 crc kubenswrapper[4720]: E0202 09:00:52.098980 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-wm6qb_openshift-marketplace(bd2b2ce4-bf90-4cab-b03c-010e17f20ff5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" podUID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.119146 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.173421 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.223752 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.252614 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.269858 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.274447 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.295682 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.533925 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.538705 4720 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.570128 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.571247 4720 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.586441 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.609448 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.917789 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.918039 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 09:00:52 crc kubenswrapper[4720]: I0202 09:00:52.930188 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.040158 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.124127 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.151805 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.209638 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.250289 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.303147 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.378086 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.481981 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.483011 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.502262 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.577117 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.601446 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.701429 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 
02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.808862 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.861487 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 09:00:53 crc kubenswrapper[4720]: I0202 09:00:53.898180 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.030455 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.033573 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.083961 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.090388 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.107407 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.212012 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.385683 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.463163 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.538992 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.593677 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.663706 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.677222 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.798901 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.882624 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 09:00:54 crc kubenswrapper[4720]: I0202 09:00:54.905477 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 09:00:55 crc kubenswrapper[4720]: I0202 09:00:55.133005 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 09:00:55 crc kubenswrapper[4720]: I0202 09:00:55.175444 4720 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 09:00:55 crc kubenswrapper[4720]: I0202 09:00:55.530163 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 09:00:55 crc kubenswrapper[4720]: I0202 09:00:55.677302 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.144584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.280619 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.496515 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.817682 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.817748 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940069 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940181 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940268 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940301 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940326 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940321 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940354 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940464 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.940461 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.941107 4720 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.941131 4720 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.941144 4720 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.941159 4720 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.951656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:00:56 crc kubenswrapper[4720]: I0202 09:00:56.985129 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.042979 4720 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.133156 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.133230 4720 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29" exitCode=137 Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.133286 4720 scope.go:117] "RemoveContainer" containerID="72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.133419 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.159302 4720 scope.go:117] "RemoveContainer" containerID="72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29" Feb 02 09:00:57 crc kubenswrapper[4720]: E0202 09:00:57.159690 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29\": container with ID starting with 72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29 not found: ID does not exist" containerID="72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29" Feb 02 09:00:57 crc kubenswrapper[4720]: I0202 09:00:57.159730 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29"} err="failed to get container status \"72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29\": rpc error: code = NotFound desc = could not find container \"72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29\": container with ID starting with 72ac204af93e5907a6c5f08c42d826a49f4d11c32fe74c336828a6dd4aaa1a29 not found: ID does not exist" Feb 02 09:00:58 crc kubenswrapper[4720]: I0202 09:00:58.690943 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:58 crc kubenswrapper[4720]: I0202 09:00:58.691394 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:00:58 crc kubenswrapper[4720]: I0202 09:00:58.692299 4720 scope.go:117] "RemoveContainer" containerID="c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352" Feb 02 09:00:58 crc kubenswrapper[4720]: E0202 09:00:58.692655 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-wm6qb_openshift-marketplace(bd2b2ce4-bf90-4cab-b03c-010e17f20ff5)\"" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" podUID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" Feb 02 09:00:58 crc kubenswrapper[4720]: I0202 09:00:58.896730 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 09:00:58 crc kubenswrapper[4720]: I0202 09:00:58.912000 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 09:01:06 crc kubenswrapper[4720]: I0202 09:01:06.628509 4720 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 09:01:08 crc kubenswrapper[4720]: I0202 09:01:08.887136 4720 scope.go:117] "RemoveContainer" containerID="c7ed4f86c6e9f5523a62ffa414e8167885b5b68606f6a647ed1d57d699aee352" Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.209689 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/1.log" Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.210108 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" event={"ID":"bd2b2ce4-bf90-4cab-b03c-010e17f20ff5","Type":"ContainerStarted","Data":"41b75795d3bc5064c89af8ec7f44a26ea870b73620cb8889d8040f0793fe0a7c"} Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.211149 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.213096 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wm6qb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.213140 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" podUID="bd2b2ce4-bf90-4cab-b03c-010e17f20ff5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Feb 02 09:01:09 crc kubenswrapper[4720]: I0202 09:01:09.232901 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" podStartSLOduration=23.232864701 podStartE2EDuration="23.232864701s" podCreationTimestamp="2026-02-02 09:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:01:09.231242005 +0000 UTC m=+303.086867561" watchObservedRunningTime="2026-02-02 09:01:09.232864701 +0000 UTC m=+303.088490257" Feb 02 09:01:10 crc kubenswrapper[4720]: I0202 09:01:10.218461 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wm6qb" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.210193 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 09:01:42 crc 
kubenswrapper[4720]: I0202 09:01:42.211557 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerName="controller-manager" containerID="cri-o://eb4c3ca81a4ad245e50ca647e694cf3484e2a5f335522d9d5081622e756f6792" gracePeriod=30 Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.311637 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.312084 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" podUID="350b56cd-0460-44c4-a898-b4f03938f92a" containerName="route-controller-manager" containerID="cri-o://58eb9a8a2cd9ee8aece0c669524c6da7ad3866dd7349eb2fa439a0d41fa7a1f6" gracePeriod=30 Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.457081 4720 generic.go:334] "Generic (PLEG): container finished" podID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerID="eb4c3ca81a4ad245e50ca647e694cf3484e2a5f335522d9d5081622e756f6792" exitCode=0 Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.457198 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" event={"ID":"c369d6de-8ee1-4aac-bf97-96d334c023e6","Type":"ContainerDied","Data":"eb4c3ca81a4ad245e50ca647e694cf3484e2a5f335522d9d5081622e756f6792"} Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.460217 4720 generic.go:334] "Generic (PLEG): container finished" podID="350b56cd-0460-44c4-a898-b4f03938f92a" containerID="58eb9a8a2cd9ee8aece0c669524c6da7ad3866dd7349eb2fa439a0d41fa7a1f6" exitCode=0 Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.460245 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" event={"ID":"350b56cd-0460-44c4-a898-b4f03938f92a","Type":"ContainerDied","Data":"58eb9a8a2cd9ee8aece0c669524c6da7ad3866dd7349eb2fa439a0d41fa7a1f6"} Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.597397 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.661549 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.706342 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert\") pod \"c369d6de-8ee1-4aac-bf97-96d334c023e6\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.706431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config\") pod \"c369d6de-8ee1-4aac-bf97-96d334c023e6\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.706481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles\") pod \"c369d6de-8ee1-4aac-bf97-96d334c023e6\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.706530 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca\") pod \"c369d6de-8ee1-4aac-bf97-96d334c023e6\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.706602 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46vtk\" (UniqueName: \"kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk\") pod \"c369d6de-8ee1-4aac-bf97-96d334c023e6\" (UID: \"c369d6de-8ee1-4aac-bf97-96d334c023e6\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.707429 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c369d6de-8ee1-4aac-bf97-96d334c023e6" (UID: "c369d6de-8ee1-4aac-bf97-96d334c023e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.707805 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "c369d6de-8ee1-4aac-bf97-96d334c023e6" (UID: "c369d6de-8ee1-4aac-bf97-96d334c023e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.707840 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config" (OuterVolumeSpecName: "config") pod "c369d6de-8ee1-4aac-bf97-96d334c023e6" (UID: "c369d6de-8ee1-4aac-bf97-96d334c023e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.714476 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk" (OuterVolumeSpecName: "kube-api-access-46vtk") pod "c369d6de-8ee1-4aac-bf97-96d334c023e6" (UID: "c369d6de-8ee1-4aac-bf97-96d334c023e6"). 
InnerVolumeSpecName "kube-api-access-46vtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.715243 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c369d6de-8ee1-4aac-bf97-96d334c023e6" (UID: "c369d6de-8ee1-4aac-bf97-96d334c023e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808156 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config\") pod \"350b56cd-0460-44c4-a898-b4f03938f92a\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808531 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca\") pod \"350b56cd-0460-44c4-a898-b4f03938f92a\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808566 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxhd7\" (UniqueName: \"kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7\") pod \"350b56cd-0460-44c4-a898-b4f03938f92a\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert\") pod \"350b56cd-0460-44c4-a898-b4f03938f92a\" (UID: \"350b56cd-0460-44c4-a898-b4f03938f92a\") " Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808968 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46vtk\" (UniqueName: \"kubernetes.io/projected/c369d6de-8ee1-4aac-bf97-96d334c023e6-kube-api-access-46vtk\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.808994 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c369d6de-8ee1-4aac-bf97-96d334c023e6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.809012 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.809029 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.809045 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c369d6de-8ee1-4aac-bf97-96d334c023e6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.809545 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca" (OuterVolumeSpecName: "client-ca") pod "350b56cd-0460-44c4-a898-b4f03938f92a" (UID: 
"350b56cd-0460-44c4-a898-b4f03938f92a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.809821 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config" (OuterVolumeSpecName: "config") pod "350b56cd-0460-44c4-a898-b4f03938f92a" (UID: "350b56cd-0460-44c4-a898-b4f03938f92a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.813345 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "350b56cd-0460-44c4-a898-b4f03938f92a" (UID: "350b56cd-0460-44c4-a898-b4f03938f92a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.813819 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7" (OuterVolumeSpecName: "kube-api-access-gxhd7") pod "350b56cd-0460-44c4-a898-b4f03938f92a" (UID: "350b56cd-0460-44c4-a898-b4f03938f92a"). InnerVolumeSpecName "kube-api-access-gxhd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.910340 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b56cd-0460-44c4-a898-b4f03938f92a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.910395 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.910415 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/350b56cd-0460-44c4-a898-b4f03938f92a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:42 crc kubenswrapper[4720]: I0202 09:01:42.910435 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxhd7\" (UniqueName: \"kubernetes.io/projected/350b56cd-0460-44c4-a898-b4f03938f92a-kube-api-access-gxhd7\") on node \"crc\" DevicePath \"\"" Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.468775 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.469278 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6rzkl" event={"ID":"c369d6de-8ee1-4aac-bf97-96d334c023e6","Type":"ContainerDied","Data":"225656e48f7ecd3289ff7f79a8252b487d51f3c2bf73b45f5090bc2ce4c7ed4b"} Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.469393 4720 scope.go:117] "RemoveContainer" containerID="eb4c3ca81a4ad245e50ca647e694cf3484e2a5f335522d9d5081622e756f6792" Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.471211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" event={"ID":"350b56cd-0460-44c4-a898-b4f03938f92a","Type":"ContainerDied","Data":"9e4f85e7b70aaa3c2bb998a7a2986072f7ad48c54ffbacaf7c44291c2c99f5fb"} Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.471305 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2" Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.490797 4720 scope.go:117] "RemoveContainer" containerID="58eb9a8a2cd9ee8aece0c669524c6da7ad3866dd7349eb2fa439a0d41fa7a1f6" Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.494837 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.502471 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jrbg2"] Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.507868 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 09:01:43 crc kubenswrapper[4720]: I0202 09:01:43.510950 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6rzkl"] Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.236484 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.237404 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.237634 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.237833 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.238071 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.238265 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.238477 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="registry-server" Feb 02 
09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.238690 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.238924 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.239137 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.239324 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.239523 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.239715 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.239943 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.240181 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.240391 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.240571 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.240752 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.240956 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241159 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350b56cd-0460-44c4-a898-b4f03938f92a" containerName="route-controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241192 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="350b56cd-0460-44c4-a898-b4f03938f92a" containerName="route-controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241215 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerName="marketplace-operator" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241232 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerName="marketplace-operator" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241253 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241269 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241322 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241341 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="extract-utilities" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241375 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241391 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="extract-content" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241415 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerName="controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241432 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerName="controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: E0202 09:01:44.241452 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241467 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241685 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="350b56cd-0460-44c4-a898-b4f03938f92a" containerName="route-controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241715 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00de3c0-345f-4bba-a14e-7f7f351b2d23" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241741 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2d7533-55a5-4fa2-8c6b-fd441d79a21e" containerName="marketplace-operator" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241767 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" containerName="controller-manager" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241789 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d19aa30-4d50-415b-9d62-b913ad57185e" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241812 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="838cf91b-dbd2-4574-8769-4dd1b0dbc9a8" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241835 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d422076d-6f6a-42ea-a820-4aa8399e4a8c" containerName="registry-server" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.241857 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.244450 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l"] Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 
09:01:44.244676 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.246670 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.249620 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.250366 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.251394 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.252110 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.252756 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.255309 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.255366 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.256476 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.256928 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.258069 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.260468 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.263112 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.281828 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.288179 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.299025 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l"] Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.332812 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-client-ca\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: 
\"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.333096 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-config\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434373 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-proxy-ca-bundles\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434458 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-client-ca\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07178dd-e47a-4bfe-8e53-3f834fa5855d-serving-cert\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434603 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-config\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434646 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pv8x\" (UniqueName: \"kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434681 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " 
pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434745 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t48lt\" (UniqueName: \"kubernetes.io/projected/c07178dd-e47a-4bfe-8e53-3f834fa5855d-kube-api-access-t48lt\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.434785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.435821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-client-ca\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.437637 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-config\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pv8x\" (UniqueName: \"kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535579 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t48lt\" (UniqueName: \"kubernetes.io/projected/c07178dd-e47a-4bfe-8e53-3f834fa5855d-kube-api-access-t48lt\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 
crc kubenswrapper[4720]: I0202 09:01:44.535699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-proxy-ca-bundles\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535739 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.535779 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07178dd-e47a-4bfe-8e53-3f834fa5855d-serving-cert\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.537830 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.538761 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.538768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c07178dd-e47a-4bfe-8e53-3f834fa5855d-proxy-ca-bundles\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.545131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.551980 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07178dd-e47a-4bfe-8e53-3f834fa5855d-serving-cert\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.565589 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pv8x\" (UniqueName: 
\"kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x\") pod \"route-controller-manager-57fbf49df8-vzlrf\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.568344 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t48lt\" (UniqueName: \"kubernetes.io/projected/c07178dd-e47a-4bfe-8e53-3f834fa5855d-kube-api-access-t48lt\") pod \"controller-manager-5648cfc4d7-jjq7l\" (UID: \"c07178dd-e47a-4bfe-8e53-3f834fa5855d\") " pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.609857 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.615342 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.863368 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l"] Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.904363 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350b56cd-0460-44c4-a898-b4f03938f92a" path="/var/lib/kubelet/pods/350b56cd-0460-44c4-a898-b4f03938f92a/volumes" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.904948 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c369d6de-8ee1-4aac-bf97-96d334c023e6" path="/var/lib/kubelet/pods/c369d6de-8ee1-4aac-bf97-96d334c023e6/volumes" Feb 02 09:01:44 crc kubenswrapper[4720]: I0202 09:01:44.915194 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.492999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" event={"ID":"c07178dd-e47a-4bfe-8e53-3f834fa5855d","Type":"ContainerStarted","Data":"78dd9568488da317043bd9caef991b68a11e47672f7a9441fcc3e2deb0360c9e"} Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.493055 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" event={"ID":"c07178dd-e47a-4bfe-8e53-3f834fa5855d","Type":"ContainerStarted","Data":"ac45d69e4a3d943272d9eef55f2e1ca9d2f4d09051a7b95605c0bbc2636efe4c"} Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.493591 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.497022 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" event={"ID":"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec","Type":"ContainerStarted","Data":"3d9bc388423eadafec6e2ca632985ae414e587d39559a996f1a57d6380e878d7"} Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.497081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" 
event={"ID":"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec","Type":"ContainerStarted","Data":"1b6ce3cc74952a6ab4424fd9fa33e956481575c5eafd43a0f89102f320179127"} Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.497257 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.497369 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.504408 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.513993 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5648cfc4d7-jjq7l" podStartSLOduration=3.513974287 podStartE2EDuration="3.513974287s" podCreationTimestamp="2026-02-02 09:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:01:45.512052251 +0000 UTC m=+339.367677807" watchObservedRunningTime="2026-02-02 09:01:45.513974287 +0000 UTC m=+339.369599843" Feb 02 09:01:45 crc kubenswrapper[4720]: I0202 09:01:45.536809 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" podStartSLOduration=3.5367888880000002 podStartE2EDuration="3.536788888s" podCreationTimestamp="2026-02-02 09:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:01:45.534139594 +0000 UTC m=+339.389765160" watchObservedRunningTime="2026-02-02 09:01:45.536788888 +0000 UTC m=+339.392414444" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.194926 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.196065 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" podUID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" containerName="route-controller-manager" containerID="cri-o://3d9bc388423eadafec6e2ca632985ae414e587d39559a996f1a57d6380e878d7" gracePeriod=30 Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.646024 4720 generic.go:334] "Generic (PLEG): container finished" podID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" containerID="3d9bc388423eadafec6e2ca632985ae414e587d39559a996f1a57d6380e878d7" exitCode=0 Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.646116 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" event={"ID":"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec","Type":"ContainerDied","Data":"3d9bc388423eadafec6e2ca632985ae414e587d39559a996f1a57d6380e878d7"} Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.646685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" 
event={"ID":"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec","Type":"ContainerDied","Data":"1b6ce3cc74952a6ab4424fd9fa33e956481575c5eafd43a0f89102f320179127"} Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.646712 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b6ce3cc74952a6ab4424fd9fa33e956481575c5eafd43a0f89102f320179127" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.649488 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.699473 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert\") pod \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.699563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pv8x\" (UniqueName: \"kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x\") pod \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.699600 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca\") pod \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.699710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config\") pod \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\" (UID: \"ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec\") " Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.700472 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" (UID: "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.700510 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config" (OuterVolumeSpecName: "config") pod "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" (UID: "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.717236 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" (UID: "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.717252 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x" (OuterVolumeSpecName: "kube-api-access-6pv8x") pod "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" (UID: "ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec"). InnerVolumeSpecName "kube-api-access-6pv8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.801366 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.801405 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pv8x\" (UniqueName: \"kubernetes.io/projected/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-kube-api-access-6pv8x\") on node \"crc\" DevicePath \"\"" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.801419 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:02:02 crc kubenswrapper[4720]: I0202 09:02:02.801432 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.245697 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq"] Feb 02 09:02:03 crc kubenswrapper[4720]: E0202 09:02:03.246258 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" containerName="route-controller-manager" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.246280 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" containerName="route-controller-manager" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.246460 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" containerName="route-controller-manager" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.247187 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.269905 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq"] Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.309526 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e3f04-29ac-406f-a14a-c5f766d4a5f0-serving-cert\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.309629 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-config\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.309684 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skddl\" (UniqueName: \"kubernetes.io/projected/550e3f04-29ac-406f-a14a-c5f766d4a5f0-kube-api-access-skddl\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.309728 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-client-ca\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.411714 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e3f04-29ac-406f-a14a-c5f766d4a5f0-serving-cert\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.411816 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-config\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.411873 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skddl\" (UniqueName: \"kubernetes.io/projected/550e3f04-29ac-406f-a14a-c5f766d4a5f0-kube-api-access-skddl\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.411952 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-client-ca\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.413660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-client-ca\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.414314 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e3f04-29ac-406f-a14a-c5f766d4a5f0-config\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.423165 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e3f04-29ac-406f-a14a-c5f766d4a5f0-serving-cert\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.444448 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skddl\" (UniqueName: \"kubernetes.io/projected/550e3f04-29ac-406f-a14a-c5f766d4a5f0-kube-api-access-skddl\") pod \"route-controller-manager-7dfcdcc44b-c92hq\" (UID: \"550e3f04-29ac-406f-a14a-c5f766d4a5f0\") " pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.574310 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.655752 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf" Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.688220 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:02:03 crc kubenswrapper[4720]: I0202 09:02:03.694592 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fbf49df8-vzlrf"] Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.091036 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq"] Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.662004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" event={"ID":"550e3f04-29ac-406f-a14a-c5f766d4a5f0","Type":"ContainerStarted","Data":"adf877f4e61dcb9493ea867335e46b0acedfa7c1ccf5ffb98eec1d69a50d9797"} Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.662421 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.662436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" event={"ID":"550e3f04-29ac-406f-a14a-c5f766d4a5f0","Type":"ContainerStarted","Data":"8f18c2352275121a216e447190b4deb8b04c4841301da67e95b27fcd88305ab5"} Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.695351 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" podStartSLOduration=2.695326802 podStartE2EDuration="2.695326802s" podCreationTimestamp="2026-02-02 09:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:02:04.693321784 +0000 UTC m=+358.548947340" watchObservedRunningTime="2026-02-02 09:02:04.695326802 +0000 UTC m=+358.550952398" Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.765474 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dfcdcc44b-c92hq" Feb 02 09:02:04 crc kubenswrapper[4720]: I0202 09:02:04.894478 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec" path="/var/lib/kubelet/pods/ec0bc98b-3f25-4339-8ba2-ca8cfe3086ec/volumes" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.113161 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zscxq"] Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.114358 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.135763 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zscxq"] Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296266 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ccb6188-5e23-42a6-b593-96e0bc54696f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296545 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-tls\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296655 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-certificates\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-trusted-ca\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296739 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqtp\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-kube-api-access-2cqtp\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ccb6188-5e23-42a6-b593-96e0bc54696f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.296978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.297042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-bound-sa-token\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.333931 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-trusted-ca\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398237 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqtp\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-kube-api-access-2cqtp\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398285 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ccb6188-5e23-42a6-b593-96e0bc54696f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398353 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-bound-sa-token\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ccb6188-5e23-42a6-b593-96e0bc54696f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-tls\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.398541 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-certificates\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.400767 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-certificates\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.405450 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ccb6188-5e23-42a6-b593-96e0bc54696f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.412613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ccb6188-5e23-42a6-b593-96e0bc54696f-trusted-ca\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.412657 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-registry-tls\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.418646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ccb6188-5e23-42a6-b593-96e0bc54696f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.429163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqtp\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-kube-api-access-2cqtp\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.433908 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ccb6188-5e23-42a6-b593-96e0bc54696f-bound-sa-token\") pod \"image-registry-66df7c8f76-zscxq\" (UID: \"9ccb6188-5e23-42a6-b593-96e0bc54696f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.443943 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:14 crc kubenswrapper[4720]: I0202 09:02:14.941358 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zscxq"] Feb 02 09:02:15 crc kubenswrapper[4720]: I0202 09:02:15.756559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" event={"ID":"9ccb6188-5e23-42a6-b593-96e0bc54696f","Type":"ContainerStarted","Data":"2de35d6b8a24f4247e387bdc2aec060ad24913444af6cd20d078d645f862f6d0"} Feb 02 09:02:15 crc kubenswrapper[4720]: I0202 09:02:15.757755 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:15 crc kubenswrapper[4720]: I0202 09:02:15.757788 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" event={"ID":"9ccb6188-5e23-42a6-b593-96e0bc54696f","Type":"ContainerStarted","Data":"5df6a8fc9502954e0764601b0282912b07b080eee0151981b4d50b52fb4d5ff6"} Feb 02 09:02:15 crc kubenswrapper[4720]: I0202 09:02:15.781980 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" podStartSLOduration=1.781954789 podStartE2EDuration="1.781954789s" podCreationTimestamp="2026-02-02 09:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:02:15.776716224 +0000 UTC m=+369.632341810" watchObservedRunningTime="2026-02-02 09:02:15.781954789 +0000 UTC m=+369.637580335" Feb 02 09:02:17 crc kubenswrapper[4720]: I0202 09:02:17.901981 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:02:17 crc kubenswrapper[4720]: I0202 09:02:17.902422 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.536466 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6hp7"] Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.538271 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.540866 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.560823 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6hp7"] Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.632778 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ddt\" (UniqueName: \"kubernetes.io/projected/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-kube-api-access-f9ddt\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.632920 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-utilities\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.633213 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-catalog-content\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.733258 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b84cw"] Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.734343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-utilities\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.734417 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-catalog-content\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.734467 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ddt\" (UniqueName: \"kubernetes.io/projected/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-kube-api-access-f9ddt\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.734821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-utilities\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.734852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-catalog-content\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.735653 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.738098 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.751981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b84cw"] Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.759566 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ddt\" (UniqueName: \"kubernetes.io/projected/55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81-kube-api-access-f9ddt\") pod \"community-operators-h6hp7\" (UID: \"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81\") " pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.835272 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-utilities\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.835325 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5hm\" (UniqueName: \"kubernetes.io/projected/be069028-7bae-40c2-a12f-780dbf9c4ccc-kube-api-access-xr5hm\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.835364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-catalog-content\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.899421 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.937233 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-utilities\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.937336 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr5hm\" (UniqueName: \"kubernetes.io/projected/be069028-7bae-40c2-a12f-780dbf9c4ccc-kube-api-access-xr5hm\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.937415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-catalog-content\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.938105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-utilities\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.938242 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be069028-7bae-40c2-a12f-780dbf9c4ccc-catalog-content\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:28 crc kubenswrapper[4720]: I0202 09:02:28.980275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr5hm\" (UniqueName: \"kubernetes.io/projected/be069028-7bae-40c2-a12f-780dbf9c4ccc-kube-api-access-xr5hm\") pod \"certified-operators-b84cw\" (UID: \"be069028-7bae-40c2-a12f-780dbf9c4ccc\") " pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.062043 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.307966 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6hp7"] Feb 02 09:02:29 crc kubenswrapper[4720]: W0202 09:02:29.316414 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55269fe7_fb5f_4d5d_b9f3_b9ddf189ae81.slice/crio-a8158996cc2d2c8e9e741dd8faaad799125b57c46658c20a87254d5069216760 WatchSource:0}: Error finding container a8158996cc2d2c8e9e741dd8faaad799125b57c46658c20a87254d5069216760: Status 404 returned error can't find the container with id a8158996cc2d2c8e9e741dd8faaad799125b57c46658c20a87254d5069216760 Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.454697 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b84cw"] Feb 02 09:02:29 crc kubenswrapper[4720]: W0202 09:02:29.499438 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe069028_7bae_40c2_a12f_780dbf9c4ccc.slice/crio-b6e769b9b8892cb83daeb909eb5d1ae09b8f175c669048622dff879355bf32aa WatchSource:0}: Error finding container b6e769b9b8892cb83daeb909eb5d1ae09b8f175c669048622dff879355bf32aa: Status 404 returned error can't find the container with id b6e769b9b8892cb83daeb909eb5d1ae09b8f175c669048622dff879355bf32aa Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.843656 4720 generic.go:334] "Generic (PLEG): container finished" podID="be069028-7bae-40c2-a12f-780dbf9c4ccc" containerID="e2584ecfaf87a971a720373d158c77273fcc63d00ce719ae6a83d7b8f8573590" exitCode=0 Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.843730 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b84cw" event={"ID":"be069028-7bae-40c2-a12f-780dbf9c4ccc","Type":"ContainerDied","Data":"e2584ecfaf87a971a720373d158c77273fcc63d00ce719ae6a83d7b8f8573590"} Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.844058 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b84cw" event={"ID":"be069028-7bae-40c2-a12f-780dbf9c4ccc","Type":"ContainerStarted","Data":"b6e769b9b8892cb83daeb909eb5d1ae09b8f175c669048622dff879355bf32aa"} Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.846326 4720 generic.go:334] "Generic (PLEG): container finished" podID="55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81" containerID="73c6abe85624074a7e1262e63f186192dca0d3ae776097727e984b09d51d44fe" exitCode=0 Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.846376 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6hp7" event={"ID":"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81","Type":"ContainerDied","Data":"73c6abe85624074a7e1262e63f186192dca0d3ae776097727e984b09d51d44fe"} Feb 02 09:02:29 crc kubenswrapper[4720]: I0202 09:02:29.846400 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6hp7" event={"ID":"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81","Type":"ContainerStarted","Data":"a8158996cc2d2c8e9e741dd8faaad799125b57c46658c20a87254d5069216760"} Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.854034 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6hp7" 
event={"ID":"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81","Type":"ContainerStarted","Data":"aaf269a4efd1b3aa42aa8e69ef3d6b02639380a71d9d465566b942e419a88577"} Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.860294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b84cw" event={"ID":"be069028-7bae-40c2-a12f-780dbf9c4ccc","Type":"ContainerStarted","Data":"f402f511f1210d5b9024a7f54ca42240524a0b59eaf0436a592d28fd5f7a216f"} Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.931258 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r87rg"] Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.934078 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.937323 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 09:02:30 crc kubenswrapper[4720]: I0202 09:02:30.942173 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r87rg"] Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.095156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-utilities\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.095290 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8fv\" (UniqueName: \"kubernetes.io/projected/0502035c-9982-417c-94f2-73046cbfbbbc-kube-api-access-lz8fv\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.095360 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-catalog-content\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.128970 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmgrw"] Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.132070 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.142611 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.153117 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmgrw"] Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.196824 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-utilities\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.196906 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8fv\" (UniqueName: \"kubernetes.io/projected/0502035c-9982-417c-94f2-73046cbfbbbc-kube-api-access-lz8fv\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.196951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-catalog-content\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.197487 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-catalog-content\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.197584 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0502035c-9982-417c-94f2-73046cbfbbbc-utilities\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.215410 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8fv\" (UniqueName: \"kubernetes.io/projected/0502035c-9982-417c-94f2-73046cbfbbbc-kube-api-access-lz8fv\") pod \"redhat-marketplace-r87rg\" (UID: \"0502035c-9982-417c-94f2-73046cbfbbbc\") " pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.297950 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-catalog-content\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.298046 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgvwz\" (UniqueName: \"kubernetes.io/projected/f409f361-d210-43c7-a209-3e2cf6678eb1-kube-api-access-bgvwz\") pod \"redhat-operators-rmgrw\" (UID: 
\"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.298077 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-utilities\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.399075 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-utilities\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.399205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-catalog-content\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.399337 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgvwz\" (UniqueName: \"kubernetes.io/projected/f409f361-d210-43c7-a209-3e2cf6678eb1-kube-api-access-bgvwz\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.400076 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-utilities\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.400120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f409f361-d210-43c7-a209-3e2cf6678eb1-catalog-content\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.417480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgvwz\" (UniqueName: \"kubernetes.io/projected/f409f361-d210-43c7-a209-3e2cf6678eb1-kube-api-access-bgvwz\") pod \"redhat-operators-rmgrw\" (UID: \"f409f361-d210-43c7-a209-3e2cf6678eb1\") " pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.452376 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.461972 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.763070 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmgrw"] Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.870160 4720 generic.go:334] "Generic (PLEG): container finished" podID="be069028-7bae-40c2-a12f-780dbf9c4ccc" containerID="f402f511f1210d5b9024a7f54ca42240524a0b59eaf0436a592d28fd5f7a216f" exitCode=0 Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.870245 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b84cw" event={"ID":"be069028-7bae-40c2-a12f-780dbf9c4ccc","Type":"ContainerDied","Data":"f402f511f1210d5b9024a7f54ca42240524a0b59eaf0436a592d28fd5f7a216f"} Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.872540 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgrw" event={"ID":"f409f361-d210-43c7-a209-3e2cf6678eb1","Type":"ContainerStarted","Data":"6eea3046f4fce81cee7e2b624a84c762552ad62c5e6c7f906408c935ed92a1ec"} Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.876352 4720 generic.go:334] "Generic (PLEG): container finished" podID="55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81" containerID="aaf269a4efd1b3aa42aa8e69ef3d6b02639380a71d9d465566b942e419a88577" exitCode=0 Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.876407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6hp7" event={"ID":"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81","Type":"ContainerDied","Data":"aaf269a4efd1b3aa42aa8e69ef3d6b02639380a71d9d465566b942e419a88577"} Feb 02 09:02:31 crc kubenswrapper[4720]: I0202 09:02:31.940154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r87rg"] Feb 02 09:02:31 crc kubenswrapper[4720]: W0202 09:02:31.944254 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0502035c_9982_417c_94f2_73046cbfbbbc.slice/crio-5651d70aba2a1b8b7df9fcfd74ddb23d4a64bf684bbb34db50f0b029bddcb391 WatchSource:0}: Error finding container 5651d70aba2a1b8b7df9fcfd74ddb23d4a64bf684bbb34db50f0b029bddcb391: Status 404 returned error can't find the container with id 5651d70aba2a1b8b7df9fcfd74ddb23d4a64bf684bbb34db50f0b029bddcb391 Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.885459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b84cw" event={"ID":"be069028-7bae-40c2-a12f-780dbf9c4ccc","Type":"ContainerStarted","Data":"67a936091535ff38ce4f99a7e71ada36238e274d0810734a170524c51b44cc8d"} Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.893102 4720 generic.go:334] "Generic (PLEG): container finished" podID="f409f361-d210-43c7-a209-3e2cf6678eb1" containerID="5ddc4fb92a49910a1c1a5ad981de654f1a9a65a4694d7b61a48e8dc5e903a746" exitCode=0 Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.896075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgrw" event={"ID":"f409f361-d210-43c7-a209-3e2cf6678eb1","Type":"ContainerDied","Data":"5ddc4fb92a49910a1c1a5ad981de654f1a9a65a4694d7b61a48e8dc5e903a746"} Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.897322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6hp7" 
event={"ID":"55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81","Type":"ContainerStarted","Data":"956f0ee85f921c22969f1a3b3b43ff72ae81cfaea3e0864ac6186fbb30606487"} Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.899902 4720 generic.go:334] "Generic (PLEG): container finished" podID="0502035c-9982-417c-94f2-73046cbfbbbc" containerID="26009afebb1b73830a1017192990cfeaf64f77cbec0afbbc926777b3ed6dba1c" exitCode=0 Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.899939 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r87rg" event={"ID":"0502035c-9982-417c-94f2-73046cbfbbbc","Type":"ContainerDied","Data":"26009afebb1b73830a1017192990cfeaf64f77cbec0afbbc926777b3ed6dba1c"} Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.899963 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r87rg" event={"ID":"0502035c-9982-417c-94f2-73046cbfbbbc","Type":"ContainerStarted","Data":"5651d70aba2a1b8b7df9fcfd74ddb23d4a64bf684bbb34db50f0b029bddcb391"} Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.924196 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b84cw" podStartSLOduration=2.429436439 podStartE2EDuration="4.924174354s" podCreationTimestamp="2026-02-02 09:02:28 +0000 UTC" firstStartedPulling="2026-02-02 09:02:29.845593001 +0000 UTC m=+383.701218557" lastFinishedPulling="2026-02-02 09:02:32.340330876 +0000 UTC m=+386.195956472" observedRunningTime="2026-02-02 09:02:32.922846972 +0000 UTC m=+386.778472558" watchObservedRunningTime="2026-02-02 09:02:32.924174354 +0000 UTC m=+386.779799920" Feb 02 09:02:32 crc kubenswrapper[4720]: I0202 09:02:32.948711 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6hp7" podStartSLOduration=2.305376317 podStartE2EDuration="4.948671714s" podCreationTimestamp="2026-02-02 09:02:28 +0000 UTC" firstStartedPulling="2026-02-02 09:02:29.849746321 +0000 UTC m=+383.705371897" lastFinishedPulling="2026-02-02 09:02:32.493041708 +0000 UTC m=+386.348667294" observedRunningTime="2026-02-02 09:02:32.937872364 +0000 UTC m=+386.793497930" watchObservedRunningTime="2026-02-02 09:02:32.948671714 +0000 UTC m=+386.804297280" Feb 02 09:02:33 crc kubenswrapper[4720]: I0202 09:02:33.909527 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgrw" event={"ID":"f409f361-d210-43c7-a209-3e2cf6678eb1","Type":"ContainerStarted","Data":"75dd19dfa6d374622fa975aa00a781c71d89950a76ae9b99c95b1d026a1e0ef5"} Feb 02 09:02:33 crc kubenswrapper[4720]: I0202 09:02:33.911458 4720 generic.go:334] "Generic (PLEG): container finished" podID="0502035c-9982-417c-94f2-73046cbfbbbc" containerID="75f3c19391928267584f38e223fb9b3ca42f3daa02f7cd29e197ea243d2ed94f" exitCode=0 Feb 02 09:02:33 crc kubenswrapper[4720]: I0202 09:02:33.912480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r87rg" event={"ID":"0502035c-9982-417c-94f2-73046cbfbbbc","Type":"ContainerDied","Data":"75f3c19391928267584f38e223fb9b3ca42f3daa02f7cd29e197ea243d2ed94f"} Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.449493 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zscxq" Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.528014 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"] Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.919010 4720 generic.go:334] "Generic (PLEG): container finished" podID="f409f361-d210-43c7-a209-3e2cf6678eb1" containerID="75dd19dfa6d374622fa975aa00a781c71d89950a76ae9b99c95b1d026a1e0ef5" exitCode=0 Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.919277 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgrw" event={"ID":"f409f361-d210-43c7-a209-3e2cf6678eb1","Type":"ContainerDied","Data":"75dd19dfa6d374622fa975aa00a781c71d89950a76ae9b99c95b1d026a1e0ef5"} Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.923739 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r87rg" event={"ID":"0502035c-9982-417c-94f2-73046cbfbbbc","Type":"ContainerStarted","Data":"da6713c70b94214da04609226ef187ed48e99dce6c29cae2db9c3be75c40ad06"} Feb 02 09:02:34 crc kubenswrapper[4720]: I0202 09:02:34.973088 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r87rg" podStartSLOduration=3.2914667 podStartE2EDuration="4.973067598s" podCreationTimestamp="2026-02-02 09:02:30 +0000 UTC" firstStartedPulling="2026-02-02 09:02:32.902079071 +0000 UTC m=+386.757704647" lastFinishedPulling="2026-02-02 09:02:34.583679989 +0000 UTC m=+388.439305545" observedRunningTime="2026-02-02 09:02:34.971051319 +0000 UTC m=+388.826676875" watchObservedRunningTime="2026-02-02 09:02:34.973067598 +0000 UTC m=+388.828693174" Feb 02 09:02:35 crc kubenswrapper[4720]: I0202 09:02:35.931236 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgrw" event={"ID":"f409f361-d210-43c7-a209-3e2cf6678eb1","Type":"ContainerStarted","Data":"c83305974251374c2b19022e6944776326354ebe633cf4033c78a9060849da93"} Feb 02 09:02:35 crc kubenswrapper[4720]: I0202 09:02:35.954165 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmgrw" podStartSLOduration=2.543926638 podStartE2EDuration="4.954140315s" podCreationTimestamp="2026-02-02 09:02:31 +0000 UTC" firstStartedPulling="2026-02-02 09:02:32.897013829 +0000 UTC m=+386.752639395" lastFinishedPulling="2026-02-02 09:02:35.307227506 +0000 UTC m=+389.162853072" observedRunningTime="2026-02-02 09:02:35.952361229 +0000 UTC m=+389.807986805" watchObservedRunningTime="2026-02-02 09:02:35.954140315 +0000 UTC m=+389.809765901" Feb 02 09:02:38 crc kubenswrapper[4720]: I0202 09:02:38.899758 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:38 crc kubenswrapper[4720]: I0202 09:02:38.900978 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:38 crc kubenswrapper[4720]: I0202 09:02:38.977170 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:39 crc kubenswrapper[4720]: I0202 09:02:39.062775 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:39 crc kubenswrapper[4720]: I0202 09:02:39.062848 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:39 crc kubenswrapper[4720]: I0202 09:02:39.132216 
4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:40 crc kubenswrapper[4720]: I0202 09:02:40.017122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6hp7" Feb 02 09:02:40 crc kubenswrapper[4720]: I0202 09:02:40.041663 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b84cw" Feb 02 09:02:41 crc kubenswrapper[4720]: I0202 09:02:41.452729 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:41 crc kubenswrapper[4720]: I0202 09:02:41.452809 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:41 crc kubenswrapper[4720]: I0202 09:02:41.463270 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:41 crc kubenswrapper[4720]: I0202 09:02:41.463368 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:41 crc kubenswrapper[4720]: I0202 09:02:41.508468 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:42 crc kubenswrapper[4720]: I0202 09:02:42.027971 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r87rg" Feb 02 09:02:42 crc kubenswrapper[4720]: I0202 09:02:42.536229 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmgrw" podUID="f409f361-d210-43c7-a209-3e2cf6678eb1" containerName="registry-server" probeResult="failure" output=< Feb 02 09:02:42 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:02:42 crc kubenswrapper[4720]: > Feb 02 09:02:47 crc kubenswrapper[4720]: I0202 09:02:47.902725 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:02:47 crc kubenswrapper[4720]: I0202 09:02:47.904097 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:02:51 crc kubenswrapper[4720]: I0202 09:02:51.535936 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:51 crc kubenswrapper[4720]: I0202 09:02:51.607554 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmgrw" Feb 02 09:02:59 crc kubenswrapper[4720]: I0202 09:02:59.605178 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" podUID="d4f92bb0-73fe-45d5-870b-a63931a4ef12" containerName="registry" containerID="cri-o://aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05" gracePeriod=30 
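
The Liveness failures for machine-config-daemon-8l7nw in the entries above arrive on a steady 30 s cadence (09:02:17, 09:02:47, 09:03:17) until, a little further down, the kubelet gives up and restarts the container ("Container machine-config-daemon failed liveness probe, will be restarted"). A minimal sketch for pulling that cadence out of this log — the regex mirrors the klog prefix format visible in the entries above; the `kubelet.log` filename and the 2026 year (klog prefixes omit the year) are assumptions taken from the surrounding journal timestamps:

```python
import re
from datetime import datetime

# Matches the klog prefix of the kubelet "Probe failed" records in this log, e.g.:
#   I0202 09:02:17.902422 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="..."
PROBE_FAILED = re.compile(
    r'I(\d{4})\s+(\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+prober\.go:\d+\]\s+'
    r'"Probe failed"\s+probeType="Liveness"\s+pod="([^"]+)"'
)

def liveness_failures(text, pod, year=2026):
    """Timestamps of liveness-probe failures for one pod.

    The year is not part of the klog prefix; 2026 is an assumption
    read off the surrounding journal timestamps in this excerpt.
    """
    stamps = []
    for mmdd, hms, p in PROBE_FAILED.findall(text):
        if p == pod:
            stamps.append(datetime.strptime(f"{year}{mmdd} {hms}", "%Y%m%d %H:%M:%S.%f"))
    return stamps

if __name__ == "__main__":
    pod = "openshift-machine-config-operator/machine-config-daemon-8l7nw"
    with open("kubelet.log") as f:  # hypothetical local copy of this excerpt
        stamps = liveness_failures(f.read(), pod)
    # Print each failure together with the gap since the previous one.
    for earlier, later in zip(stamps, stamps[1:]):
        print(later.time(), f"+{(later - earlier).total_seconds():.0f}s")
```

Run against this excerpt it would print +30s twice in the lead-up to the 09:03:17 restart decision, then +150s for the next failure at 09:05:47 — i.e. the failures pause once the replacement container (521bb730…) comes up and only resume later.
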
Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.059421 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.104237 4720 generic.go:334] "Generic (PLEG): container finished" podID="d4f92bb0-73fe-45d5-870b-a63931a4ef12" containerID="aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05" exitCode=0 Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.104295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" event={"ID":"d4f92bb0-73fe-45d5-870b-a63931a4ef12","Type":"ContainerDied","Data":"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05"} Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.104328 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" event={"ID":"d4f92bb0-73fe-45d5-870b-a63931a4ef12","Type":"ContainerDied","Data":"3493fdebb5dddae98bf1c3507acf3ce4458a75b62da549873301e3d3963e60b0"} Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.104349 4720 scope.go:117] "RemoveContainer" containerID="aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.104469 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bqg82" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.124445 4720 scope.go:117] "RemoveContainer" containerID="aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05" Feb 02 09:03:00 crc kubenswrapper[4720]: E0202 09:03:00.125023 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05\": container with ID starting with aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05 not found: ID does not exist" containerID="aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.125066 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05"} err="failed to get container status \"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05\": rpc error: code = NotFound desc = could not find container \"aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05\": container with ID starting with aca4c3274f0b5396aaef466e2a56c6e81569ade74c8845b6c499224954054c05 not found: ID does not exist" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258334 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258545 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258645 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258695 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.258733 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.259000 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.259073 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75kx4\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4\") pod \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\" (UID: \"d4f92bb0-73fe-45d5-870b-a63931a4ef12\") " Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.259988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.260598 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.270214 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.274688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.276298 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.285088 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.285147 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4" (OuterVolumeSpecName: "kube-api-access-75kx4") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "kube-api-access-75kx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.297044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d4f92bb0-73fe-45d5-870b-a63931a4ef12" (UID: "d4f92bb0-73fe-45d5-870b-a63931a4ef12"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361165 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361229 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361256 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4f92bb0-73fe-45d5-870b-a63931a4ef12-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361276 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4f92bb0-73fe-45d5-870b-a63931a4ef12-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361293 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361329 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4f92bb0-73fe-45d5-870b-a63931a4ef12-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.361347 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75kx4\" (UniqueName: \"kubernetes.io/projected/d4f92bb0-73fe-45d5-870b-a63931a4ef12-kube-api-access-75kx4\") on node \"crc\" DevicePath \"\"" Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.453659 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"] Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.457578 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bqg82"] Feb 02 09:03:00 crc kubenswrapper[4720]: I0202 09:03:00.900497 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f92bb0-73fe-45d5-870b-a63931a4ef12" path="/var/lib/kubelet/pods/d4f92bb0-73fe-45d5-870b-a63931a4ef12/volumes" Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.902393 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.903287 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.903354 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 
Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.902393 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.903287 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.903354 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw"
Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.904599 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 09:03:17 crc kubenswrapper[4720]: I0202 09:03:17.904736 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc" gracePeriod=600
Feb 02 09:03:18 crc kubenswrapper[4720]: I0202 09:03:18.230829 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc" exitCode=0
Feb 02 09:03:18 crc kubenswrapper[4720]: I0202 09:03:18.230927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc"}
Feb 02 09:03:18 crc kubenswrapper[4720]: I0202 09:03:18.231525 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae"}
Feb 02 09:03:18 crc kubenswrapper[4720]: I0202 09:03:18.231680 4720 scope.go:117] "RemoveContainer" containerID="582a4e0ae0c6e6fd77d7b537be0f4e52603ecbd7d252892e9549ad8b35df03f7"
Feb 02 09:05:07 crc kubenswrapper[4720]: I0202 09:05:07.069418 4720 scope.go:117] "RemoveContainer" containerID="7f0507baefdd68255014f2c93f95df985ebf86be92cc10249bbfd63cb0410fe8"
Feb 02 09:05:47 crc kubenswrapper[4720]: I0202 09:05:47.902451 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:05:47 crc kubenswrapper[4720]: I0202 09:05:47.903158 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
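
The probe entries above record an HTTP liveness check against http://127.0.0.1:8798/health failing with connection refused, after which the kubelet kills the container with its 600s grace period and starts a replacement. The probe's actual spec (period, thresholds) is not in the log; the sketch below re-creates just the check itself, under the usual kubelet convention that any 2xx/3xx response counts as success, which is an assumption here.

// Standalone re-creation of the HTTP liveness check shown failing above.
// The endpoint (127.0.0.1:8798/health) comes from the log; the 1s timeout
// and the 2xx/3xx success criterion are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Liveness probe failed:", err)
		return
	}
	fmt.Println("Liveness probe succeeded")
}
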
podUID="d4f92bb0-73fe-45d5-870b-a63931a4ef12" containerName="registry" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.699231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.701816 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.718986 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.720561 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xm9kj" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.751241 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2"] Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.763053 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7bc8x"] Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.763935 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7bc8x" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.766018 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-phqgl" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.773970 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7bc8x"] Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.780680 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-twh55"] Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.781418 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.784039 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mk5j8" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.796324 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-twh55"] Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.852842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjnp\" (UniqueName: \"kubernetes.io/projected/8cd38fa1-b879-4abd-86e5-3d9fd6847c6a-kube-api-access-vjjnp\") pod \"cert-manager-cainjector-cf98fcc89-dsxr2\" (UID: \"8cd38fa1-b879-4abd-86e5-3d9fd6847c6a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.953774 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gv6\" (UniqueName: \"kubernetes.io/projected/5e86cac4-2acd-49d8-b01e-ef7becdce359-kube-api-access-l6gv6\") pod \"cert-manager-webhook-687f57d79b-twh55\" (UID: \"5e86cac4-2acd-49d8-b01e-ef7becdce359\") " pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.953828 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjnp\" (UniqueName: \"kubernetes.io/projected/8cd38fa1-b879-4abd-86e5-3d9fd6847c6a-kube-api-access-vjjnp\") pod \"cert-manager-cainjector-cf98fcc89-dsxr2\" (UID: \"8cd38fa1-b879-4abd-86e5-3d9fd6847c6a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.953874 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fphzp\" (UniqueName: \"kubernetes.io/projected/1dbc9be1-9930-49c6-a8e1-0767194f295f-kube-api-access-fphzp\") pod \"cert-manager-858654f9db-7bc8x\" (UID: \"1dbc9be1-9930-49c6-a8e1-0767194f295f\") " pod="cert-manager/cert-manager-858654f9db-7bc8x" Feb 02 09:06:05 crc kubenswrapper[4720]: I0202 09:06:05.981133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjnp\" (UniqueName: \"kubernetes.io/projected/8cd38fa1-b879-4abd-86e5-3d9fd6847c6a-kube-api-access-vjjnp\") pod \"cert-manager-cainjector-cf98fcc89-dsxr2\" (UID: \"8cd38fa1-b879-4abd-86e5-3d9fd6847c6a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.037334 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.055167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gv6\" (UniqueName: \"kubernetes.io/projected/5e86cac4-2acd-49d8-b01e-ef7becdce359-kube-api-access-l6gv6\") pod \"cert-manager-webhook-687f57d79b-twh55\" (UID: \"5e86cac4-2acd-49d8-b01e-ef7becdce359\") " pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.055271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fphzp\" (UniqueName: \"kubernetes.io/projected/1dbc9be1-9930-49c6-a8e1-0767194f295f-kube-api-access-fphzp\") pod \"cert-manager-858654f9db-7bc8x\" (UID: \"1dbc9be1-9930-49c6-a8e1-0767194f295f\") " pod="cert-manager/cert-manager-858654f9db-7bc8x" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.097256 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fphzp\" (UniqueName: \"kubernetes.io/projected/1dbc9be1-9930-49c6-a8e1-0767194f295f-kube-api-access-fphzp\") pod \"cert-manager-858654f9db-7bc8x\" (UID: \"1dbc9be1-9930-49c6-a8e1-0767194f295f\") " pod="cert-manager/cert-manager-858654f9db-7bc8x" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.097495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gv6\" (UniqueName: \"kubernetes.io/projected/5e86cac4-2acd-49d8-b01e-ef7becdce359-kube-api-access-l6gv6\") pod \"cert-manager-webhook-687f57d79b-twh55\" (UID: \"5e86cac4-2acd-49d8-b01e-ef7becdce359\") " pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.253681 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2"] Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.264816 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.376038 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7bc8x" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.396318 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.418582 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" event={"ID":"8cd38fa1-b879-4abd-86e5-3d9fd6847c6a","Type":"ContainerStarted","Data":"ac8b4010217e4bf543ad3d56a543ec77dde28f1aeb50b5d40c75be3d0c267757"} Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.662939 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7bc8x"] Feb 02 09:06:06 crc kubenswrapper[4720]: W0202 09:06:06.669741 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dbc9be1_9930_49c6_a8e1_0767194f295f.slice/crio-6756db067c8ef3dcb271e032907c7dc47566e346256c63f44cf6461f92b4d9d4 WatchSource:0}: Error finding container 6756db067c8ef3dcb271e032907c7dc47566e346256c63f44cf6461f92b4d9d4: Status 404 returned error can't find the container with id 6756db067c8ef3dcb271e032907c7dc47566e346256c63f44cf6461f92b4d9d4 Feb 02 09:06:06 crc kubenswrapper[4720]: I0202 09:06:06.722109 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-twh55"] Feb 02 09:06:06 crc kubenswrapper[4720]: W0202 09:06:06.733172 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e86cac4_2acd_49d8_b01e_ef7becdce359.slice/crio-959c6b74c9d041667399b634c93d0aa238c2a64de1b3c68db507ae91b20dbcc7 WatchSource:0}: Error finding container 959c6b74c9d041667399b634c93d0aa238c2a64de1b3c68db507ae91b20dbcc7: Status 404 returned error can't find the container with id 959c6b74c9d041667399b634c93d0aa238c2a64de1b3c68db507ae91b20dbcc7 Feb 02 09:06:07 crc kubenswrapper[4720]: I0202 09:06:07.425688 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" event={"ID":"5e86cac4-2acd-49d8-b01e-ef7becdce359","Type":"ContainerStarted","Data":"959c6b74c9d041667399b634c93d0aa238c2a64de1b3c68db507ae91b20dbcc7"} Feb 02 09:06:07 crc kubenswrapper[4720]: I0202 09:06:07.427131 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7bc8x" event={"ID":"1dbc9be1-9930-49c6-a8e1-0767194f295f","Type":"ContainerStarted","Data":"6756db067c8ef3dcb271e032907c7dc47566e346256c63f44cf6461f92b4d9d4"} Feb 02 09:06:09 crc kubenswrapper[4720]: I0202 09:06:09.438624 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" event={"ID":"8cd38fa1-b879-4abd-86e5-3d9fd6847c6a","Type":"ContainerStarted","Data":"535988b45ce87f7ba427dd4043ffe3c864f7bfda2e25360d3d94c63d1e2e4cad"} Feb 02 09:06:09 crc kubenswrapper[4720]: I0202 09:06:09.452493 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dsxr2" podStartSLOduration=1.931239202 podStartE2EDuration="4.452479862s" podCreationTimestamp="2026-02-02 09:06:05 +0000 UTC" firstStartedPulling="2026-02-02 09:06:06.264489981 +0000 UTC m=+600.120115547" lastFinishedPulling="2026-02-02 09:06:08.785730651 +0000 UTC m=+602.641356207" observedRunningTime="2026-02-02 09:06:09.449980338 +0000 UTC m=+603.305605894" watchObservedRunningTime="2026-02-02 09:06:09.452479862 +0000 UTC m=+603.308105418" Feb 02 09:06:11 crc kubenswrapper[4720]: I0202 09:06:11.451204 4720 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" event={"ID":"5e86cac4-2acd-49d8-b01e-ef7becdce359","Type":"ContainerStarted","Data":"72e7cb55d02845b9064622b0054a661a73c3c34a339cc15502196fe89f350a98"} Feb 02 09:06:11 crc kubenswrapper[4720]: I0202 09:06:11.452220 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:11 crc kubenswrapper[4720]: I0202 09:06:11.455119 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7bc8x" event={"ID":"1dbc9be1-9930-49c6-a8e1-0767194f295f","Type":"ContainerStarted","Data":"f19137317f59dd65542dd244102f43c128793ea970c9db6a3a9087b3c3270561"} Feb 02 09:06:11 crc kubenswrapper[4720]: I0202 09:06:11.481822 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" podStartSLOduration=2.940376214 podStartE2EDuration="6.481793348s" podCreationTimestamp="2026-02-02 09:06:05 +0000 UTC" firstStartedPulling="2026-02-02 09:06:06.736488764 +0000 UTC m=+600.592114320" lastFinishedPulling="2026-02-02 09:06:10.277905908 +0000 UTC m=+604.133531454" observedRunningTime="2026-02-02 09:06:11.474621925 +0000 UTC m=+605.330247521" watchObservedRunningTime="2026-02-02 09:06:11.481793348 +0000 UTC m=+605.337418924" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.116823 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7bc8x" podStartSLOduration=6.4441860779999995 podStartE2EDuration="10.116783525s" podCreationTimestamp="2026-02-02 09:06:05 +0000 UTC" firstStartedPulling="2026-02-02 09:06:06.672485208 +0000 UTC m=+600.528110764" lastFinishedPulling="2026-02-02 09:06:10.345082645 +0000 UTC m=+604.200708211" observedRunningTime="2026-02-02 09:06:11.494424391 +0000 UTC m=+605.350049957" watchObservedRunningTime="2026-02-02 09:06:15.116783525 +0000 UTC m=+608.972409121" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.122704 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrwzp"] Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.123544 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-controller" containerID="cri-o://25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.123823 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="nbdb" containerID="cri-o://de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.124038 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="northd" containerID="cri-o://54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.124248 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.123601 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="sbdb" containerID="cri-o://3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.124328 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-node" containerID="cri-o://325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.124303 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-acl-logging" containerID="cri-o://c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.177752 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" containerID="cri-o://1199a8ef90482788a5fb7472156bbf633191d6af67369e6da58d8fd34a6aedc0" gracePeriod=30 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.484256 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovnkube-controller/3.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.487719 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-acl-logging/0.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.488777 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-controller/0.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489516 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="1199a8ef90482788a5fb7472156bbf633191d6af67369e6da58d8fd34a6aedc0" exitCode=0 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489559 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf" exitCode=0 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489573 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36" exitCode=0 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489588 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d" exitCode=0 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489605 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6" exitCode=0 Feb 02 
09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489620 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97" exitCode=0 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489634 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617" exitCode=143 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489647 4720 generic.go:334] "Generic (PLEG): container finished" podID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerID="25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87" exitCode=143 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"1199a8ef90482788a5fb7472156bbf633191d6af67369e6da58d8fd34a6aedc0"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489866 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489911 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489930 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489966 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.489983 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.490000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" 
event={"ID":"8f50847b-84da-40bb-9cc3-7ddb139f6c0e","Type":"ContainerDied","Data":"dfd3a63dc5725b10cbe7ae38aaad6d10cf791ad7316c8dd00ce1d6bf348208bf"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.490018 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd3a63dc5725b10cbe7ae38aaad6d10cf791ad7316c8dd00ce1d6bf348208bf" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.490047 4720 scope.go:117] "RemoveContainer" containerID="2957ad418f04dbfab8e2a2e479dc1882b67dcb62c93cce40995a4d2f4c76b7a1" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.494517 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/2.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.495365 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/1.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.495422 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" containerID="79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545" exitCode=2 Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.495457 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerDied","Data":"79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545"} Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.496029 4720 scope.go:117] "RemoveContainer" containerID="79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.496323 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ft6vx_openshift-multus(cd3c075e-27ea-4a49-b3bc-0bd6ca79c764)\"" pod="openshift-multus/multus-ft6vx" podUID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.503130 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-acl-logging/0.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.503856 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-controller/0.log" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.504516 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.528649 4720 scope.go:117] "RemoveContainer" containerID="2832355265d72092b5aec854952d2096ebfa6bc5be020a7283114977c9deeb36" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.584312 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6569q"] Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.584793 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.584915 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585009 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="nbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585080 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="nbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585151 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585215 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585295 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="northd" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585361 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="northd" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585516 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585595 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585634 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kubecfg-setup" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585645 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kubecfg-setup" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585662 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-acl-logging" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585671 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-acl-logging" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585701 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585710 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585720 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-node" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585727 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-node" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585766 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="sbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585775 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="sbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.585790 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.585798 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586133 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="nbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586155 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="sbdb" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586166 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586177 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586187 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="northd" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586197 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586219 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586228 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="kube-rbac-proxy-node" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586237 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586245 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovn-acl-logging" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.586388 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586402 
4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586536 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: E0202 09:06:15.586670 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586689 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.586808 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" containerName="ovnkube-controller" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.589004 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688305 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688511 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688627 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688659 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688700 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688782 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688826 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688877 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688935 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.688974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689023 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjcd2\" (UniqueName: \"kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689102 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689111 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689137 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689239 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689272 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin\") pod \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\" (UID: \"8f50847b-84da-40bb-9cc3-7ddb139f6c0e\") " Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689217 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689403 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689246 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689282 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689493 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689501 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket" (OuterVolumeSpecName: "log-socket") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689336 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log" (OuterVolumeSpecName: "node-log") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689390 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash" (OuterVolumeSpecName: "host-slash") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689290 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689599 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689705 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-netd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-var-lib-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689794 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06687423-7f29-4324-895b-0a3458ec5e18-ovn-node-metrics-cert\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689814 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-config\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.689931 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690048 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-log-socket\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690107 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-systemd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690239 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-bin\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690377 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-netns\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690455 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690642 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-node-log\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-env-overrides\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690760 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-ovn\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690808 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-script-lib\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690924 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-kubelet\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.690960 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-etc-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-systemd-units\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691151 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqwt\" (UniqueName: 
\"kubernetes.io/projected/06687423-7f29-4324-895b-0a3458ec5e18-kube-api-access-hgqwt\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691202 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-slash\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691308 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691331 4720 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691349 4720 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691369 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691389 4720 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691408 4720 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691426 4720 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691445 4720 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691462 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691478 4720 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691493 4720 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-log-socket\") on node \"crc\" DevicePath 
\"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691510 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691529 4720 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691546 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691563 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691578 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.691594 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.697492 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2" (OuterVolumeSpecName: "kube-api-access-mjcd2") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "kube-api-access-mjcd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.699067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.714289 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8f50847b-84da-40bb-9cc3-7ddb139f6c0e" (UID: "8f50847b-84da-40bb-9cc3-7ddb139f6c0e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.793658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-ovn\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.793757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-script-lib\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.793811 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-kubelet\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.793844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-etc-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.793979 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-systemd-units\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-ovn\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-kubelet\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794117 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-etc-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794034 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqwt\" (UniqueName: \"kubernetes.io/projected/06687423-7f29-4324-895b-0a3458ec5e18-kube-api-access-hgqwt\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794186 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-systemd-units\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794203 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-slash\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794237 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-slash\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794254 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-netd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794290 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-netd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794320 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-var-lib-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794366 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06687423-7f29-4324-895b-0a3458ec5e18-ovn-node-metrics-cert\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-config\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794440 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-systemd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-log-socket\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794532 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794560 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-bin\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-netns\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794675 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794716 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-node-log\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-env-overrides\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-var-lib-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794944 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-netns\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794962 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-run-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.794993 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-systemd\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795010 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-cni-bin\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795165 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795216 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-log-socket\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795215 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-run-openvswitch\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795248 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795281 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06687423-7f29-4324-895b-0a3458ec5e18-node-log\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-script-lib\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 
09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795422 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjcd2\" (UniqueName: \"kubernetes.io/projected/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-kube-api-access-mjcd2\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.795454 4720 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8f50847b-84da-40bb-9cc3-7ddb139f6c0e-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.796271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-env-overrides\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.796285 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06687423-7f29-4324-895b-0a3458ec5e18-ovnkube-config\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.800828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06687423-7f29-4324-895b-0a3458ec5e18-ovn-node-metrics-cert\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.820805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqwt\" (UniqueName: \"kubernetes.io/projected/06687423-7f29-4324-895b-0a3458ec5e18-kube-api-access-hgqwt\") pod \"ovnkube-node-6569q\" (UID: \"06687423-7f29-4324-895b-0a3458ec5e18\") " pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: I0202 09:06:15.918551 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:15 crc kubenswrapper[4720]: W0202 09:06:15.956722 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06687423_7f29_4324_895b_0a3458ec5e18.slice/crio-bd4bb364208df1e646e02a33b68ce7580acad6c71f584aec324c1ab5dc930522 WatchSource:0}: Error finding container bd4bb364208df1e646e02a33b68ce7580acad6c71f584aec324c1ab5dc930522: Status 404 returned error can't find the container with id bd4bb364208df1e646e02a33b68ce7580acad6c71f584aec324c1ab5dc930522 Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.400472 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-twh55" Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.501759 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/2.log" Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.503297 4720 generic.go:334] "Generic (PLEG): container finished" podID="06687423-7f29-4324-895b-0a3458ec5e18" containerID="f2328fdc30080b72b9db9132d13766f096979549dad5eeb04e2a7f7027591e6b" exitCode=0 Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.503368 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerDied","Data":"f2328fdc30080b72b9db9132d13766f096979549dad5eeb04e2a7f7027591e6b"} Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.503406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"bd4bb364208df1e646e02a33b68ce7580acad6c71f584aec324c1ab5dc930522"} Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.508133 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-acl-logging/0.log" Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.508735 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mrwzp_8f50847b-84da-40bb-9cc3-7ddb139f6c0e/ovn-controller/0.log" Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.509218 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mrwzp" Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.601852 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrwzp"] Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.605754 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mrwzp"] Feb 02 09:06:16 crc kubenswrapper[4720]: I0202 09:06:16.897869 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f50847b-84da-40bb-9cc3-7ddb139f6c0e" path="/var/lib/kubelet/pods/8f50847b-84da-40bb-9cc3-7ddb139f6c0e/volumes" Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"0681a188d4d8498b95a486eb90cbbff08181147688d5435c5a7917da1f428855"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"53815362f9e5f2512d732f39acda559b3e8e612c5da54d4edeccde916cf0845b"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518553 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"79890619f69e98d803c16396714e420b6a78e3e582515f88852e692c57a7fd28"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"42df1428710e6c7a2008eb92c0adbda10efafc11498e39a4e596cae347088d8e"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518570 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"6eaa0e81c2b33812bb1e4d390c97c72637e6b2d473b318129c7ae3f013c59011"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.518579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"37d488c1cf83f11d1afb219a5199ec65a9aeae447455f821ab1decc369ba05b2"} Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.902405 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:06:17 crc kubenswrapper[4720]: I0202 09:06:17.902483 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:06:20 crc kubenswrapper[4720]: I0202 09:06:20.552802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" 
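
The 09:06:17 liveness failure above is kubelet's HTTP prober getting connection refused from the machine-config-daemon health endpoint. A sketch of the equivalent check, using the URL from the log; the one-second timeout matches the kubelet default timeoutSeconds, and kubelet counts any status in [200, 400) as success:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce mimics the shape of kubelet's HTTP liveness check. A dial error
    // such as "connect: connection refused" is a probe failure, exactly like
    // the prober.go records above.
    func probeOnce(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println("Probe failed:", err)
    	}
    }
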
event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"3805772280cb15723afbd67d7adcf03ce79d813f53656427caf7e7e67940d7c8"} Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.569475 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" event={"ID":"06687423-7f29-4324-895b-0a3458ec5e18","Type":"ContainerStarted","Data":"66e82d0d3d6370d69e5250cab93ff6e551b575072067b06263d6b9a5a98b047e"} Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.570225 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.570276 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.570290 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.598616 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" podStartSLOduration=7.598598442 podStartE2EDuration="7.598598442s" podCreationTimestamp="2026-02-02 09:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:06:22.595959344 +0000 UTC m=+616.451584920" watchObservedRunningTime="2026-02-02 09:06:22.598598442 +0000 UTC m=+616.454224008" Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.605555 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:22 crc kubenswrapper[4720]: I0202 09:06:22.611795 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:29 crc kubenswrapper[4720]: I0202 09:06:29.887269 4720 scope.go:117] "RemoveContainer" containerID="79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545" Feb 02 09:06:29 crc kubenswrapper[4720]: E0202 09:06:29.888873 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ft6vx_openshift-multus(cd3c075e-27ea-4a49-b3bc-0bd6ca79c764)\"" pod="openshift-multus/multus-ft6vx" podUID="cd3c075e-27ea-4a49-b3bc-0bd6ca79c764" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.565453 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.568968 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.571692 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9q7dt" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.572483 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.573478 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.713215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-data\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.713465 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-run\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.713741 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx65k\" (UniqueName: \"kubernetes.io/projected/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-kube-api-access-hx65k\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.713936 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-log\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.814988 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx65k\" (UniqueName: \"kubernetes.io/projected/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-kube-api-access-hx65k\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.815021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-log\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.815049 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-data\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.815063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-run\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.815408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-run\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " 
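
Unlike the ovnkube-node pod, whose mounts above are almost all hostPath, the openstack/ceph pod is assembled from three emptyDir volumes plus the projected service-account token. A sketch using simplified stand-in types (not the real k8s.io/api structs) to summarize what the records show; the mount paths are taken from the container dump later in this log:

    package main

    import "fmt"

    // Simplified stand-ins for the Kubernetes volume types, summarizing the
    // openstack/ceph volumes above.
    type volume struct {
    	Name      string
    	Source    string // "emptyDir" or "projected"
    	MountPath string
    }

    func main() {
    	for _, v := range []volume{
    		{"data", "emptyDir", "/var/lib/ceph"},
    		{"log", "emptyDir", "/var/log/ceph"},
    		{"run", "emptyDir", "/run/ceph"},
    		{"kube-api-access-hx65k", "projected", "/var/run/secrets/kubernetes.io/serviceaccount"},
    	} {
    		fmt.Printf("%-22s %-9s %s\n", v.Name, v.Source, v.MountPath)
    	}
    }
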
pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.815934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-log\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.816427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-data\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.837554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx65k\" (UniqueName: \"kubernetes.io/projected/2fe71d97-adbd-42c9-91b6-eaf03ad200f0-kube-api-access-hx65k\") pod \"ceph\" (UID: \"2fe71d97-adbd-42c9-91b6-eaf03ad200f0\") " pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: I0202 09:06:40.890178 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Feb 02 09:06:40 crc kubenswrapper[4720]: W0202 09:06:40.926331 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe71d97_adbd_42c9_91b6_eaf03ad200f0.slice/crio-4eb2704f8c7d12aac857363d4621b9cefa27e86c39867abd7c7e9a2119188a45 WatchSource:0}: Error finding container 4eb2704f8c7d12aac857363d4621b9cefa27e86c39867abd7c7e9a2119188a45: Status 404 returned error can't find the container with id 4eb2704f8c7d12aac857363d4621b9cefa27e86c39867abd7c7e9a2119188a45 Feb 02 09:06:40 crc kubenswrapper[4720]: E0202 09:06:40.961119 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:40 crc kubenswrapper[4720]: E0202 09:06:40.980306 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:41 crc kubenswrapper[4720]: I0202 09:06:41.730200 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"2fe71d97-adbd-42c9-91b6-eaf03ad200f0","Type":"ContainerStarted","Data":"4eb2704f8c7d12aac857363d4621b9cefa27e86c39867abd7c7e9a2119188a45"} Feb 02 09:06:41 crc kubenswrapper[4720]: I0202 09:06:41.888016 4720 scope.go:117] "RemoveContainer" containerID="79e315e1e388c3b54029e31eb47747d74c0304a7af58ae56f37f2c4d2e324545" Feb 02 09:06:42 crc kubenswrapper[4720]: E0202 09:06:42.123092 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:42 crc kubenswrapper[4720]: E0202 09:06:42.150358 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:42 crc kubenswrapper[4720]: I0202 09:06:42.737517 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-ft6vx_cd3c075e-27ea-4a49-b3bc-0bd6ca79c764/kube-multus/2.log" Feb 02 09:06:42 crc kubenswrapper[4720]: I0202 09:06:42.737580 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ft6vx" event={"ID":"cd3c075e-27ea-4a49-b3bc-0bd6ca79c764","Type":"ContainerStarted","Data":"bc3944d55be6f9b2b0b19e588d0a6b111c21c7916dacdebcdf1c4564d94b6b82"} Feb 02 09:06:43 crc kubenswrapper[4720]: E0202 09:06:43.311711 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:43 crc kubenswrapper[4720]: E0202 09:06:43.324119 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:44 crc kubenswrapper[4720]: E0202 09:06:44.495029 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:44 crc kubenswrapper[4720]: E0202 09:06:44.508245 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:45 crc kubenswrapper[4720]: E0202 09:06:45.663489 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:45 crc kubenswrapper[4720]: E0202 09:06:45.675142 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:45 crc kubenswrapper[4720]: I0202 09:06:45.939616 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6569q" Feb 02 09:06:46 crc kubenswrapper[4720]: E0202 09:06:46.841384 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:46 crc kubenswrapper[4720]: E0202 09:06:46.869154 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:47 crc kubenswrapper[4720]: I0202 09:06:47.901474 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:06:47 crc kubenswrapper[4720]: I0202 09:06:47.901551 4720 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:06:47 crc kubenswrapper[4720]: I0202 09:06:47.901615 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:06:47 crc kubenswrapper[4720]: I0202 09:06:47.903031 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:06:47 crc kubenswrapper[4720]: I0202 09:06:47.903113 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae" gracePeriod=600 Feb 02 09:06:48 crc kubenswrapper[4720]: E0202 09:06:48.026485 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:48 crc kubenswrapper[4720]: E0202 09:06:48.041546 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:49 crc kubenswrapper[4720]: E0202 09:06:49.218362 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:49 crc kubenswrapper[4720]: E0202 09:06:49.232811 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:49 crc kubenswrapper[4720]: I0202 09:06:49.782964 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae" exitCode=0 Feb 02 09:06:49 crc kubenswrapper[4720]: I0202 09:06:49.783052 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae"} Feb 02 09:06:49 crc kubenswrapper[4720]: I0202 09:06:49.783274 4720 scope.go:117] "RemoveContainer" containerID="cf9cee7501a3e05d287afb0cc452abbcbcb1f5a250c109df332e30e51785cbdc" Feb 02 09:06:50 crc kubenswrapper[4720]: E0202 09:06:50.398144 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, 
SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:50 crc kubenswrapper[4720]: E0202 09:06:50.410390 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:51 crc kubenswrapper[4720]: E0202 09:06:51.562569 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:51 crc kubenswrapper[4720]: E0202 09:06:51.577998 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:52 crc kubenswrapper[4720]: E0202 09:06:52.739324 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:52 crc kubenswrapper[4720]: E0202 09:06:52.761019 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:53 crc kubenswrapper[4720]: E0202 09:06:53.936024 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:53 crc kubenswrapper[4720]: E0202 09:06:53.954700 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:55 crc kubenswrapper[4720]: E0202 09:06:55.120732 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:55 crc kubenswrapper[4720]: E0202 09:06:55.139779 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:56 crc kubenswrapper[4720]: E0202 09:06:56.329303 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:56 crc kubenswrapper[4720]: E0202 09:06:56.347715 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate 
signed by unknown authority" Feb 02 09:06:58 crc kubenswrapper[4720]: E0202 09:06:58.027202 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:58 crc kubenswrapper[4720]: E0202 09:06:58.051021 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:59 crc kubenswrapper[4720]: E0202 09:06:59.201309 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:06:59 crc kubenswrapper[4720]: E0202 09:06:59.222778 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority" Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.365058 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.365488 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx65k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(2fe71d97-adbd-42c9-91b6-eaf03ad200f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.366850 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/ceph" podUID="2fe71d97-adbd-42c9-91b6-eaf03ad200f0"
Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.394122 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority"
Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.411270 4720 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5680074553168411242, SKID=, AKID=D0:C3:2C:F9:CE:75:85:2B:05:98:F5:A2:04:C0:75:1F:73:1C:CA:9D failed: x509: certificate signed by unknown authority"
Feb 02 09:07:00 crc kubenswrapper[4720]: I0202 09:07:00.871160 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d"}
Feb 02 09:07:00 crc kubenswrapper[4720]: E0202 09:07:00.873778 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="2fe71d97-adbd-42c9-91b6-eaf03ad200f0"
[... the same paired server.go:309 x509 "certificate signed by unknown authority" error repeats roughly every 1.2 s, 09:07:01.594946 through 09:07:06.417512 (10 entries) ...]
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.141533 4720 scope.go:117] "RemoveContainer" containerID="d667f1681509ae00c5fae9e989bcd3ce6d03ff55fae55b2937f99dc708f34cd6"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.166173 4720 scope.go:117] "RemoveContainer" containerID="c085aea1756c1331d3317711dcaead25a68d9836212b625d5b5b9ec55fd71617"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.185809 4720 scope.go:117] "RemoveContainer" containerID="1199a8ef90482788a5fb7472156bbf633191d6af67369e6da58d8fd34a6aedc0"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.211006 4720 scope.go:117] "RemoveContainer" containerID="de422443d69e0a2426ddc077e533f66c3cb111035af9c91855255a9ae4b8fa36"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.231601 4720 scope.go:117] "RemoveContainer" containerID="3959915e205b2154bd8fefbe2f25b933bfac2cbcc1de2472d418635c7c862ddf"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.256101 4720 scope.go:117] "RemoveContainer" containerID="54b7b400c441c5d73e74b6f448ebde2cf6c78f758de5ba607c44953e5a443e0d"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.279295 4720 scope.go:117] "RemoveContainer" containerID="c65dd503136651492939e1e2d37e23b9df1deea0b35db60831725e900250cd97"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.303144 4720 scope.go:117] "RemoveContainer" containerID="325add3d62ad7b7cb5eee4a1cf7165b8640efcfd7b6b0aa6547cd8148b056f97"
Feb 02 09:07:07 crc kubenswrapper[4720]: I0202 09:07:07.366605 4720 scope.go:117] "RemoveContainer" containerID="25f0e2afc9046217115131f6034f1796eba8882d77dbe675440d6b494870df87"
[... paired x509 authentication errors continue, 09:07:07.589914 through 09:07:12.396642 (10 entries) ...]
Feb 02 09:07:12 crc kubenswrapper[4720]: I0202 09:07:12.960652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"2fe71d97-adbd-42c9-91b6-eaf03ad200f0","Type":"ContainerStarted","Data":"8860d786868dc82e2051bc95c9d5dbf63cf3fba7d86c9ff8dcc0ef19e2ddf779"}
Feb 02 09:07:12 crc kubenswrapper[4720]: I0202 09:07:12.982631 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=1.602666649 podStartE2EDuration="32.982610359s" podCreationTimestamp="2026-02-02 09:06:40 +0000 UTC" firstStartedPulling="2026-02-02 09:06:40.928743308 +0000 UTC m=+634.784368904" lastFinishedPulling="2026-02-02 09:07:12.308687048 +0000 UTC m=+666.164312614" observedRunningTime="2026-02-02 09:07:12.979969022 +0000 UTC m=+666.835594588" watchObservedRunningTime="2026-02-02 09:07:12.982610359 +0000 UTC m=+666.838235925"
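The startup-latency entry above is internally consistent and worth unpacking: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window measured on the monotonic clock (the m=+... readings). A minimal check of the arithmetic in Go, using only numbers copied from the entry:

    package main

    import "fmt"

    func main() {
        // Monotonic-clock readings (the m=+... suffixes) from the log entry.
        firstStartedPulling := 634.784368904
        lastFinishedPulling := 666.164312614

        // Wall clock: watchObservedRunningTime 09:07:12.982610359 minus
        // podCreationTimestamp 09:06:40 exactly.
        e2e := 32.982610359

        pull := lastFinishedPulling - firstStartedPulling
        slo := e2e - pull

        fmt.Printf("image pull: %.9fs\n", pull) // 31.379943710
        fmt.Printf("e2e:        %.9fs\n", e2e)  // 32.982610359
        fmt.Printf("slo:        %.9fs\n", slo)  // 1.602666649 — matches podStartSLOduration
    }

Pulling quay.io/ceph/demo:latest-squid thus accounts for ~31.4 s of the ~33 s end-to-end start, which matches the earlier ImagePullBackOff entries for the same pod.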
[... paired x509 authentication errors continue roughly every 1.2 s, 09:07:13.569957 through 09:08:07.181298 ...]
Feb 02 09:08:07 crc kubenswrapper[4720]: I0202 09:08:07.407140 4720 scope.go:117] "RemoveContainer" containerID="3d9bc388423eadafec6e2ca632985ae414e587d39559a996f1a57d6380e878d7"
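Every one of the server.go:309 failures above and below is the same condition: a client presents a certificate whose chain terminates in a CA that is absent from the kubelet's client-ca bundle. A minimal Go sketch reproducing the exact error string with crypto/x509 (file paths are illustrative; the node's bundle path appears later in this log as /etc/kubernetes/kubelet-ca.crt):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Trusted roots: the kubelet's client CA bundle.
        caPEM, err := os.ReadFile("/etc/kubernetes/kubelet-ca.crt")
        if err != nil {
            panic(err)
        }
        roots := x509.NewCertPool()
        roots.AppendCertsFromPEM(caPEM)

        // The presented client certificate (illustrative path).
        certPEM, err := os.ReadFile("client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(certPEM)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // If no chain from the leaf to a trusted root can be built, Verify
        // returns x509.UnknownAuthorityError, whose Error() string is
        // exactly "x509: certificate signed by unknown authority".
        _, err = cert.Verify(x509.VerifyOptions{
            Roots:     roots,
            KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        })
        fmt.Println(err)
    }

The SN= and AKID= fields in each log entry identify the offending leaf certificate's serial number and its issuer's authority key ID, which is how one would match the rejected certificate against the CAs actually present in the bundle on disk.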
[... the paired x509 authentication errors resume, 09:08:08.322082 through 09:08:42.756627 ...]
Feb 02 09:08:43 crc kubenswrapper[4720]: I0202 09:08:43.727853 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 09:09:08 crc kubenswrapper[4720]: E0202 09:09:08.206107 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:49558->38.102.83.177:44747: write tcp 38.102.83.177:49558->38.102.83.177:44747: write: broken pipe
Feb 02 09:09:17 crc kubenswrapper[4720]: I0202 09:09:17.901793 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:09:17 crc kubenswrapper[4720]: I0202 09:09:17.902629 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:09:31 crc kubenswrapper[4720]: I0202 09:09:31.871942 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"]
Feb 02 09:09:31 crc kubenswrapper[4720]: I0202 09:09:31.874020 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
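Two things above deserve a note. First, the 09:08:43 "Loaded a new CA Bundle and Verifier" reload is the last event before the x509 failures stop, consistent with a rotated client CA finally reaching /etc/kubernetes/kubelet-ca.crt. Second, the liveness probe failure is an ordinary HTTP probe; roughly what the kubelet does here, sketched in Go with the endpoint taken from the log and the 2xx-3xx success window from the Kubernetes probe rules:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // GET the container's health endpoint; a connection error or a
        // status outside [200, 400) counts as a probe failure.
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. connect: connection refused
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            fmt.Println("probe failure: status", resp.StatusCode)
            return
        }
        fmt.Println("probe success:", resp.StatusCode)
    }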
Feb 02 09:09:31 crc kubenswrapper[4720]: I0202 09:09:31.875997 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 09:09:31 crc kubenswrapper[4720]: I0202 09:09:31.894726 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"]
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.004632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.004694 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4r2q\" (UniqueName: \"kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.004945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.106383 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.106482 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.106519 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4r2q\" (UniqueName: \"kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.107212 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.107258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.139327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4r2q\" (UniqueName: \"kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.202137 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"
Feb 02 09:09:32 crc kubenswrapper[4720]: I0202 09:09:32.676558 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l"]
Feb 02 09:09:33 crc kubenswrapper[4720]: I0202 09:09:33.195631 4720 generic.go:334] "Generic (PLEG): container finished" podID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerID="1b9d547db8f4faba45b0a24bc55c9de83421ac2643c5c6b2d476cb656a27ee38" exitCode=0
Feb 02 09:09:33 crc kubenswrapper[4720]: I0202 09:09:33.195689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerDied","Data":"1b9d547db8f4faba45b0a24bc55c9de83421ac2643c5c6b2d476cb656a27ee38"}
Feb 02 09:09:33 crc kubenswrapper[4720]: I0202 09:09:33.195743 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerStarted","Data":"56bab9a954e387405330792181352c7e9386e9cf9717f743a372bb6235e84ff4"}
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.168913 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"]
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.170284 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.195617 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"]
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.339085 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.339131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.339237 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fww\" (UniqueName: \"kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.440228 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.440328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2fww\" (UniqueName: \"kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.440418 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.441040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.441206 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl"
Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.471057 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-z2fww\" (UniqueName: \"kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww\") pod \"redhat-operators-5dxrl\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.491240 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:34 crc kubenswrapper[4720]: I0202 09:09:34.727760 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"] Feb 02 09:09:35 crc kubenswrapper[4720]: I0202 09:09:35.209172 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerStarted","Data":"fb4c1a8551aa0ccc54d457686a09672132121b4a8d1eebe4a4dc98297fcf7bdf"} Feb 02 09:09:35 crc kubenswrapper[4720]: I0202 09:09:35.210693 4720 generic.go:334] "Generic (PLEG): container finished" podID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerID="6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e" exitCode=0 Feb 02 09:09:35 crc kubenswrapper[4720]: I0202 09:09:35.210727 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerDied","Data":"6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e"} Feb 02 09:09:35 crc kubenswrapper[4720]: I0202 09:09:35.210743 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerStarted","Data":"07580224bed30a5cd11fd1add44a3a2bed981a2029eb651ef38616362572b030"} Feb 02 09:09:36 crc kubenswrapper[4720]: I0202 09:09:36.221044 4720 generic.go:334] "Generic (PLEG): container finished" podID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerID="fb4c1a8551aa0ccc54d457686a09672132121b4a8d1eebe4a4dc98297fcf7bdf" exitCode=0 Feb 02 09:09:36 crc kubenswrapper[4720]: I0202 09:09:36.221138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerDied","Data":"fb4c1a8551aa0ccc54d457686a09672132121b4a8d1eebe4a4dc98297fcf7bdf"} Feb 02 09:09:37 crc kubenswrapper[4720]: I0202 09:09:37.233192 4720 generic.go:334] "Generic (PLEG): container finished" podID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerID="32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7" exitCode=0 Feb 02 09:09:37 crc kubenswrapper[4720]: I0202 09:09:37.233327 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerDied","Data":"32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7"} Feb 02 09:09:37 crc kubenswrapper[4720]: I0202 09:09:37.240125 4720 generic.go:334] "Generic (PLEG): container finished" podID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerID="5b7afe65772f04670f2c7a9f17cd5e2b68e8a182fbccc3f045cc673b276f3bb0" exitCode=0 Feb 02 09:09:37 crc kubenswrapper[4720]: I0202 09:09:37.240189 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" 
event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerDied","Data":"5b7afe65772f04670f2c7a9f17cd5e2b68e8a182fbccc3f045cc673b276f3bb0"} Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.251417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerStarted","Data":"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b"} Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.287059 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dxrl" podStartSLOduration=1.871858596 podStartE2EDuration="4.287041729s" podCreationTimestamp="2026-02-02 09:09:34 +0000 UTC" firstStartedPulling="2026-02-02 09:09:35.212691626 +0000 UTC m=+809.068317182" lastFinishedPulling="2026-02-02 09:09:37.627874719 +0000 UTC m=+811.483500315" observedRunningTime="2026-02-02 09:09:38.285350188 +0000 UTC m=+812.140975784" watchObservedRunningTime="2026-02-02 09:09:38.287041729 +0000 UTC m=+812.142667295" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.586308 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.711284 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4r2q\" (UniqueName: \"kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q\") pod \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.711358 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util\") pod \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.711417 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle\") pod \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\" (UID: \"a38fc737-0c77-43c6-b94a-47d89c49d9c8\") " Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.713528 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle" (OuterVolumeSpecName: "bundle") pod "a38fc737-0c77-43c6-b94a-47d89c49d9c8" (UID: "a38fc737-0c77-43c6-b94a-47d89c49d9c8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.721503 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q" (OuterVolumeSpecName: "kube-api-access-c4r2q") pod "a38fc737-0c77-43c6-b94a-47d89c49d9c8" (UID: "a38fc737-0c77-43c6-b94a-47d89c49d9c8"). InnerVolumeSpecName "kube-api-access-c4r2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.743201 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util" (OuterVolumeSpecName: "util") pod "a38fc737-0c77-43c6-b94a-47d89c49d9c8" (UID: "a38fc737-0c77-43c6-b94a-47d89c49d9c8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.812659 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-util\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.812715 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a38fc737-0c77-43c6-b94a-47d89c49d9c8-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:38 crc kubenswrapper[4720]: I0202 09:09:38.812736 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4r2q\" (UniqueName: \"kubernetes.io/projected/a38fc737-0c77-43c6-b94a-47d89c49d9c8-kube-api-access-c4r2q\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:39 crc kubenswrapper[4720]: I0202 09:09:39.262943 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" event={"ID":"a38fc737-0c77-43c6-b94a-47d89c49d9c8","Type":"ContainerDied","Data":"56bab9a954e387405330792181352c7e9386e9cf9717f743a372bb6235e84ff4"} Feb 02 09:09:39 crc kubenswrapper[4720]: I0202 09:09:39.263390 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56bab9a954e387405330792181352c7e9386e9cf9717f743a372bb6235e84ff4" Feb 02 09:09:39 crc kubenswrapper[4720]: I0202 09:09:39.262957 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.065901 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cvrx5"] Feb 02 09:09:42 crc kubenswrapper[4720]: E0202 09:09:42.066346 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="extract" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.066361 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="extract" Feb 02 09:09:42 crc kubenswrapper[4720]: E0202 09:09:42.066380 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="util" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.066387 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="util" Feb 02 09:09:42 crc kubenswrapper[4720]: E0202 09:09:42.066397 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="pull" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.066402 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="pull" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.066485 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38fc737-0c77-43c6-b94a-47d89c49d9c8" containerName="extract" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.066839 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.069859 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.069961 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.070011 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cp8n5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.078945 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cvrx5"] Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.256801 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpz5\" (UniqueName: \"kubernetes.io/projected/0438f049-5f34-4bfa-8491-8477d69b7f3d-kube-api-access-jvpz5\") pod \"nmstate-operator-646758c888-cvrx5\" (UID: \"0438f049-5f34-4bfa-8491-8477d69b7f3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.357593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvpz5\" (UniqueName: \"kubernetes.io/projected/0438f049-5f34-4bfa-8491-8477d69b7f3d-kube-api-access-jvpz5\") pod \"nmstate-operator-646758c888-cvrx5\" (UID: \"0438f049-5f34-4bfa-8491-8477d69b7f3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.392771 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvpz5\" 
(UniqueName: \"kubernetes.io/projected/0438f049-5f34-4bfa-8491-8477d69b7f3d-kube-api-access-jvpz5\") pod \"nmstate-operator-646758c888-cvrx5\" (UID: \"0438f049-5f34-4bfa-8491-8477d69b7f3d\") " pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.415788 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" Feb 02 09:09:42 crc kubenswrapper[4720]: I0202 09:09:42.684308 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cvrx5"] Feb 02 09:09:42 crc kubenswrapper[4720]: W0202 09:09:42.690584 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0438f049_5f34_4bfa_8491_8477d69b7f3d.slice/crio-79eb5361da5f3ddf869da9869e5de985b6a763ae8eec2554e1094ed4092e0a4a WatchSource:0}: Error finding container 79eb5361da5f3ddf869da9869e5de985b6a763ae8eec2554e1094ed4092e0a4a: Status 404 returned error can't find the container with id 79eb5361da5f3ddf869da9869e5de985b6a763ae8eec2554e1094ed4092e0a4a Feb 02 09:09:43 crc kubenswrapper[4720]: I0202 09:09:43.289751 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" event={"ID":"0438f049-5f34-4bfa-8491-8477d69b7f3d","Type":"ContainerStarted","Data":"79eb5361da5f3ddf869da9869e5de985b6a763ae8eec2554e1094ed4092e0a4a"} Feb 02 09:09:44 crc kubenswrapper[4720]: I0202 09:09:44.491782 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:44 crc kubenswrapper[4720]: I0202 09:09:44.491851 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:45 crc kubenswrapper[4720]: I0202 09:09:45.558253 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5dxrl" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="registry-server" probeResult="failure" output=< Feb 02 09:09:45 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:09:45 crc kubenswrapper[4720]: > Feb 02 09:09:47 crc kubenswrapper[4720]: I0202 09:09:47.901950 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:09:47 crc kubenswrapper[4720]: I0202 09:09:47.902043 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:09:52 crc kubenswrapper[4720]: I0202 09:09:52.354264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" event={"ID":"0438f049-5f34-4bfa-8491-8477d69b7f3d","Type":"ContainerStarted","Data":"8bcd2408cebda88f2491c92ba5bf5ed156c3593b7b7f178d93798d0064047323"} Feb 02 09:09:52 crc kubenswrapper[4720]: I0202 09:09:52.393556 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cvrx5" 
podStartSLOduration=1.837927601 podStartE2EDuration="10.393523101s" podCreationTimestamp="2026-02-02 09:09:42 +0000 UTC" firstStartedPulling="2026-02-02 09:09:42.693435931 +0000 UTC m=+816.549061497" lastFinishedPulling="2026-02-02 09:09:51.249031441 +0000 UTC m=+825.104656997" observedRunningTime="2026-02-02 09:09:52.377930297 +0000 UTC m=+826.233555923" watchObservedRunningTime="2026-02-02 09:09:52.393523101 +0000 UTC m=+826.249148697" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.302420 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4lrz9"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.303441 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.305161 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5pzrn" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.319937 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4lrz9"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.324489 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zb6sf"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.325559 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.328826 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.329567 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.331068 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.388342 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.414261 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6tn\" (UniqueName: \"kubernetes.io/projected/78d2d6b5-0eef-4124-946c-e987ac1fbb95-kube-api-access-fr6tn\") pod \"nmstate-metrics-54757c584b-4lrz9\" (UID: \"78d2d6b5-0eef-4124-946c-e987ac1fbb95\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.449656 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.450644 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.458487 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.460115 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.460544 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h58ln" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.479002 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-ovs-socket\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515463 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3f926bb-69d8-493a-82e3-93bb3c1446b0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6tn\" (UniqueName: \"kubernetes.io/projected/78d2d6b5-0eef-4124-946c-e987ac1fbb95-kube-api-access-fr6tn\") pod \"nmstate-metrics-54757c584b-4lrz9\" (UID: \"78d2d6b5-0eef-4124-946c-e987ac1fbb95\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6hj\" (UniqueName: \"kubernetes.io/projected/09bfc10e-1726-4216-9abf-9f8f17521be8-kube-api-access-7x6hj\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515766 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dc2\" (UniqueName: \"kubernetes.io/projected/e3f926bb-69d8-493a-82e3-93bb3c1446b0-kube-api-access-p6dc2\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.515977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-nmstate-lock\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.516011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-dbus-socket\") pod 
\"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.549328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6tn\" (UniqueName: \"kubernetes.io/projected/78d2d6b5-0eef-4124-946c-e987ac1fbb95-kube-api-access-fr6tn\") pod \"nmstate-metrics-54757c584b-4lrz9\" (UID: \"78d2d6b5-0eef-4124-946c-e987ac1fbb95\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620468 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6hj\" (UniqueName: \"kubernetes.io/projected/09bfc10e-1726-4216-9abf-9f8f17521be8-kube-api-access-7x6hj\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620680 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmctc\" (UniqueName: \"kubernetes.io/projected/f088e9b1-46d0-4f11-a561-3bffd75fb297-kube-api-access-vmctc\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dc2\" (UniqueName: \"kubernetes.io/projected/e3f926bb-69d8-493a-82e3-93bb3c1446b0-kube-api-access-p6dc2\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620834 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-nmstate-lock\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620963 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-dbus-socket\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f088e9b1-46d0-4f11-a561-3bffd75fb297-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.620977 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-nmstate-lock\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621114 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/f088e9b1-46d0-4f11-a561-3bffd75fb297-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-ovs-socket\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3f926bb-69d8-493a-82e3-93bb3c1446b0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-dbus-socket\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.621340 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/09bfc10e-1726-4216-9abf-9f8f17521be8-ovs-socket\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.625034 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3f926bb-69d8-493a-82e3-93bb3c1446b0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.637304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6hj\" (UniqueName: \"kubernetes.io/projected/09bfc10e-1726-4216-9abf-9f8f17521be8-kube-api-access-7x6hj\") pod \"nmstate-handler-zb6sf\" (UID: \"09bfc10e-1726-4216-9abf-9f8f17521be8\") " pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.637731 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dc2\" (UniqueName: \"kubernetes.io/projected/e3f926bb-69d8-493a-82e3-93bb3c1446b0-kube-api-access-p6dc2\") pod \"nmstate-webhook-8474b5b9d8-dqxbg\" (UID: \"e3f926bb-69d8-493a-82e3-93bb3c1446b0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.656524 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67bfdd788b-fplv9"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.657237 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.662461 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bfdd788b-fplv9"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.675372 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.690640 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.698699 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739316 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f088e9b1-46d0-4f11-a561-3bffd75fb297-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739350 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739371 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-trusted-ca-bundle\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739391 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-service-ca\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739410 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-oauth-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkxk\" (UniqueName: \"kubernetes.io/projected/13a3ccae-c083-43b8-8cf1-3756dd612fa2-kube-api-access-vzkxk\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739457 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739480 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmctc\" (UniqueName: \"kubernetes.io/projected/f088e9b1-46d0-4f11-a561-3bffd75fb297-kube-api-access-vmctc\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739516 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f088e9b1-46d0-4f11-a561-3bffd75fb297-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.739533 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-oauth-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.740442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f088e9b1-46d0-4f11-a561-3bffd75fb297-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.743776 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f088e9b1-46d0-4f11-a561-3bffd75fb297-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.767655 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmctc\" (UniqueName: \"kubernetes.io/projected/f088e9b1-46d0-4f11-a561-3bffd75fb297-kube-api-access-vmctc\") pod \"nmstate-console-plugin-7754f76f8b-tvcgq\" (UID: \"f088e9b1-46d0-4f11-a561-3bffd75fb297\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.785886 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-service-ca\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-oauth-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840351 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzkxk\" (UniqueName: \"kubernetes.io/projected/13a3ccae-c083-43b8-8cf1-3756dd612fa2-kube-api-access-vzkxk\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840374 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-oauth-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.840465 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-trusted-ca-bundle\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.842142 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-trusted-ca-bundle\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.842506 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-service-ca\") pod \"console-67bfdd788b-fplv9\" (UID: 
\"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.842747 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-oauth-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.845496 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.845906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-serving-cert\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.847816 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13a3ccae-c083-43b8-8cf1-3756dd612fa2-console-oauth-config\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.857572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzkxk\" (UniqueName: \"kubernetes.io/projected/13a3ccae-c083-43b8-8cf1-3756dd612fa2-kube-api-access-vzkxk\") pod \"console-67bfdd788b-fplv9\" (UID: \"13a3ccae-c083-43b8-8cf1-3756dd612fa2\") " pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.900881 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4lrz9"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.944848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg"] Feb 02 09:09:53 crc kubenswrapper[4720]: I0202 09:09:53.973450 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.008643 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq"] Feb 02 09:09:54 crc kubenswrapper[4720]: W0202 09:09:54.017565 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf088e9b1_46d0_4f11_a561_3bffd75fb297.slice/crio-a3a39d5b0e5c164817d07ca65cd49121b2cee8c108f41b13aaa76baf9fa636f2 WatchSource:0}: Error finding container a3a39d5b0e5c164817d07ca65cd49121b2cee8c108f41b13aaa76baf9fa636f2: Status 404 returned error can't find the container with id a3a39d5b0e5c164817d07ca65cd49121b2cee8c108f41b13aaa76baf9fa636f2 Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.138649 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bfdd788b-fplv9"] Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.369048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" event={"ID":"e3f926bb-69d8-493a-82e3-93bb3c1446b0","Type":"ContainerStarted","Data":"0ad68697e8d077a523690157f1831336a78ce2204e0b290aad461b42efb9d6fb"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.370331 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zb6sf" event={"ID":"09bfc10e-1726-4216-9abf-9f8f17521be8","Type":"ContainerStarted","Data":"1dde60dfda5460083d7751878dadf1d76fa6e128f26773c3e9155e7cb17bc390"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.371758 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" event={"ID":"78d2d6b5-0eef-4124-946c-e987ac1fbb95","Type":"ContainerStarted","Data":"1964c0f94456c859cb91c2e83c6688a02ce4eee620c145494f8cd53c4d2d6fd7"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.373745 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bfdd788b-fplv9" event={"ID":"13a3ccae-c083-43b8-8cf1-3756dd612fa2","Type":"ContainerStarted","Data":"2643fb5ced012d522a471b9790fddcd49d261b7be4d0567723814ef8673bfd98"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.373776 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bfdd788b-fplv9" event={"ID":"13a3ccae-c083-43b8-8cf1-3756dd612fa2","Type":"ContainerStarted","Data":"25ca1fa75c2af90171cfa0be6e9d4d964fda8c29beede7e63d33e7f2528a3619"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.375415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" event={"ID":"f088e9b1-46d0-4f11-a561-3bffd75fb297","Type":"ContainerStarted","Data":"a3a39d5b0e5c164817d07ca65cd49121b2cee8c108f41b13aaa76baf9fa636f2"} Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.398242 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67bfdd788b-fplv9" podStartSLOduration=1.398223884 podStartE2EDuration="1.398223884s" podCreationTimestamp="2026-02-02 09:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:09:54.397529298 +0000 UTC m=+828.253154874" watchObservedRunningTime="2026-02-02 09:09:54.398223884 +0000 UTC m=+828.253849440" Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.553430 4720 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.632629 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:54 crc kubenswrapper[4720]: I0202 09:09:54.806239 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"] Feb 02 09:09:56 crc kubenswrapper[4720]: I0202 09:09:56.390377 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dxrl" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="registry-server" containerID="cri-o://1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b" gracePeriod=2 Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.297909 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.396435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" event={"ID":"f088e9b1-46d0-4f11-a561-3bffd75fb297","Type":"ContainerStarted","Data":"61f3a6b58dfcdb08ccb95612f54a0a3289ed776f5c70dae7c99088adfce5de1c"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.402003 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" event={"ID":"e3f926bb-69d8-493a-82e3-93bb3c1446b0","Type":"ContainerStarted","Data":"235e39ade779e471682e6eb88a7a00cb6da1d791dfb5267bb46ec72b0aa85ccd"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.402143 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.404078 4720 generic.go:334] "Generic (PLEG): container finished" podID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerID="1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b" exitCode=0 Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.404125 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5dxrl" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.404166 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerDied","Data":"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.404201 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dxrl" event={"ID":"8e91d70a-00d6-47c6-b776-8fdcc1c305d3","Type":"ContainerDied","Data":"07580224bed30a5cd11fd1add44a3a2bed981a2029eb651ef38616362572b030"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.404228 4720 scope.go:117] "RemoveContainer" containerID="1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.406027 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zb6sf" event={"ID":"09bfc10e-1726-4216-9abf-9f8f17521be8","Type":"ContainerStarted","Data":"e2042d582845d4d5e02ddc7c4735369e5d4e9d6c857efdb6bf71dc947a752902"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.406186 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.408666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" event={"ID":"78d2d6b5-0eef-4124-946c-e987ac1fbb95","Type":"ContainerStarted","Data":"ee8c3636ee1ce027f088b51025c74093bc67ca305239f4e63e365103f3cf2447"} Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.415738 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-tvcgq" podStartSLOduration=1.445807069 podStartE2EDuration="4.415719271s" podCreationTimestamp="2026-02-02 09:09:53 +0000 UTC" firstStartedPulling="2026-02-02 09:09:54.020743848 +0000 UTC m=+827.876369404" lastFinishedPulling="2026-02-02 09:09:56.990656 +0000 UTC m=+830.846281606" observedRunningTime="2026-02-02 09:09:57.411076628 +0000 UTC m=+831.266702204" watchObservedRunningTime="2026-02-02 09:09:57.415719271 +0000 UTC m=+831.271344837" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.431070 4720 scope.go:117] "RemoveContainer" containerID="32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.448666 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" podStartSLOduration=1.339884002 podStartE2EDuration="4.448643222s" podCreationTimestamp="2026-02-02 09:09:53 +0000 UTC" firstStartedPulling="2026-02-02 09:09:53.952555118 +0000 UTC m=+827.808180674" lastFinishedPulling="2026-02-02 09:09:57.061314338 +0000 UTC m=+830.916939894" observedRunningTime="2026-02-02 09:09:57.441678264 +0000 UTC m=+831.297303820" watchObservedRunningTime="2026-02-02 09:09:57.448643222 +0000 UTC m=+831.304268788" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.454395 4720 scope.go:117] "RemoveContainer" containerID="6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.470087 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zb6sf" 
podStartSLOduration=1.222875428 podStartE2EDuration="4.470070957s" podCreationTimestamp="2026-02-02 09:09:53 +0000 UTC" firstStartedPulling="2026-02-02 09:09:53.748049011 +0000 UTC m=+827.603674567" lastFinishedPulling="2026-02-02 09:09:56.9952445 +0000 UTC m=+830.850870096" observedRunningTime="2026-02-02 09:09:57.464965835 +0000 UTC m=+831.320591411" watchObservedRunningTime="2026-02-02 09:09:57.470070957 +0000 UTC m=+831.325696513" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.476468 4720 scope.go:117] "RemoveContainer" containerID="1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b" Feb 02 09:09:57 crc kubenswrapper[4720]: E0202 09:09:57.478175 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b\": container with ID starting with 1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b not found: ID does not exist" containerID="1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.478208 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b"} err="failed to get container status \"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b\": rpc error: code = NotFound desc = could not find container \"1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b\": container with ID starting with 1627e048d9fe245f52366527bbcfc528b40fe6360f20ad1474c639dc68f7c29b not found: ID does not exist" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.478229 4720 scope.go:117] "RemoveContainer" containerID="32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7" Feb 02 09:09:57 crc kubenswrapper[4720]: E0202 09:09:57.479623 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7\": container with ID starting with 32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7 not found: ID does not exist" containerID="32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.479688 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7"} err="failed to get container status \"32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7\": rpc error: code = NotFound desc = could not find container \"32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7\": container with ID starting with 32e0a3835d8e459af972a4a51424a0e0d20d818788f539f27e2e52f4299e0bc7 not found: ID does not exist" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.479736 4720 scope.go:117] "RemoveContainer" containerID="6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e" Feb 02 09:09:57 crc kubenswrapper[4720]: E0202 09:09:57.480250 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e\": container with ID starting with 6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e not found: ID does not exist" 
containerID="6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.480284 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e"} err="failed to get container status \"6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e\": rpc error: code = NotFound desc = could not find container \"6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e\": container with ID starting with 6a303b7befaca41952f507708bd230a6d7c30808ab102743b98d5afc6b4cff6e not found: ID does not exist" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.489857 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2fww\" (UniqueName: \"kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww\") pod \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.489985 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content\") pod \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.490091 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities\") pod \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\" (UID: \"8e91d70a-00d6-47c6-b776-8fdcc1c305d3\") " Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.490945 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities" (OuterVolumeSpecName: "utilities") pod "8e91d70a-00d6-47c6-b776-8fdcc1c305d3" (UID: "8e91d70a-00d6-47c6-b776-8fdcc1c305d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.495219 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww" (OuterVolumeSpecName: "kube-api-access-z2fww") pod "8e91d70a-00d6-47c6-b776-8fdcc1c305d3" (UID: "8e91d70a-00d6-47c6-b776-8fdcc1c305d3"). InnerVolumeSpecName "kube-api-access-z2fww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.592253 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2fww\" (UniqueName: \"kubernetes.io/projected/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-kube-api-access-z2fww\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.592281 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.612602 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e91d70a-00d6-47c6-b776-8fdcc1c305d3" (UID: "8e91d70a-00d6-47c6-b776-8fdcc1c305d3"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.693764 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e91d70a-00d6-47c6-b776-8fdcc1c305d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.736626 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"] Feb 02 09:09:57 crc kubenswrapper[4720]: I0202 09:09:57.742577 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dxrl"] Feb 02 09:09:58 crc kubenswrapper[4720]: I0202 09:09:58.897519 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" path="/var/lib/kubelet/pods/8e91d70a-00d6-47c6-b776-8fdcc1c305d3/volumes" Feb 02 09:10:00 crc kubenswrapper[4720]: I0202 09:10:00.434676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" event={"ID":"78d2d6b5-0eef-4124-946c-e987ac1fbb95","Type":"ContainerStarted","Data":"ad9fd14f8e16edc1c9b07ae3edeb0a0b20b44b933c0a8c61090f90ae6f8588c7"} Feb 02 09:10:00 crc kubenswrapper[4720]: I0202 09:10:00.467375 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-4lrz9" podStartSLOduration=1.710916262 podStartE2EDuration="7.467342495s" podCreationTimestamp="2026-02-02 09:09:53 +0000 UTC" firstStartedPulling="2026-02-02 09:09:53.906831648 +0000 UTC m=+827.762457204" lastFinishedPulling="2026-02-02 09:09:59.663257881 +0000 UTC m=+833.518883437" observedRunningTime="2026-02-02 09:10:00.462855117 +0000 UTC m=+834.318480703" watchObservedRunningTime="2026-02-02 09:10:00.467342495 +0000 UTC m=+834.322968111" Feb 02 09:10:03 crc kubenswrapper[4720]: I0202 09:10:03.721642 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-zb6sf" Feb 02 09:10:03 crc kubenswrapper[4720]: I0202 09:10:03.974970 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:10:03 crc kubenswrapper[4720]: I0202 09:10:03.975087 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:10:03 crc kubenswrapper[4720]: I0202 09:10:03.982214 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:10:04 crc kubenswrapper[4720]: I0202 09:10:04.470676 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67bfdd788b-fplv9" Feb 02 09:10:04 crc kubenswrapper[4720]: I0202 09:10:04.584711 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 09:10:13 crc kubenswrapper[4720]: I0202 09:10:13.707379 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dqxbg" Feb 02 09:10:17 crc kubenswrapper[4720]: I0202 09:10:17.902277 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 02 09:10:17 crc kubenswrapper[4720]: I0202 09:10:17.902708 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:10:17 crc kubenswrapper[4720]: I0202 09:10:17.902772 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:10:17 crc kubenswrapper[4720]: I0202 09:10:17.903604 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:10:17 crc kubenswrapper[4720]: I0202 09:10:17.903690 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d" gracePeriod=600 Feb 02 09:10:18 crc kubenswrapper[4720]: I0202 09:10:18.565578 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d" exitCode=0 Feb 02 09:10:18 crc kubenswrapper[4720]: I0202 09:10:18.565669 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d"} Feb 02 09:10:18 crc kubenswrapper[4720]: I0202 09:10:18.566000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff"} Feb 02 09:10:18 crc kubenswrapper[4720]: I0202 09:10:18.566368 4720 scope.go:117] "RemoveContainer" containerID="521bb730c5dd5e002a88325f9a8d584eba2dba3a48c825467f6b1fc97674c2ae" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.606264 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9"] Feb 02 09:10:28 crc kubenswrapper[4720]: E0202 09:10:28.607093 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="registry-server" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.607114 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="registry-server" Feb 02 09:10:28 crc kubenswrapper[4720]: E0202 09:10:28.607138 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="extract-utilities" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.607150 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="extract-utilities" Feb 02 
09:10:28 crc kubenswrapper[4720]: E0202 09:10:28.607175 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="extract-content" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.607187 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="extract-content" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.607339 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e91d70a-00d6-47c6-b776-8fdcc1c305d3" containerName="registry-server" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.608418 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.615525 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.620056 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9"] Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.626429 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.626521 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmd5h\" (UniqueName: \"kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.626566 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.727744 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmd5h\" (UniqueName: \"kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.727798 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.728262 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.729342 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.729605 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.748102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmd5h\" (UniqueName: \"kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:28 crc kubenswrapper[4720]: I0202 09:10:28.932030 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:29 crc kubenswrapper[4720]: I0202 09:10:29.360841 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9"] Feb 02 09:10:29 crc kubenswrapper[4720]: I0202 09:10:29.639038 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9pczc" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerName="console" containerID="cri-o://1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e" gracePeriod=15 Feb 02 09:10:29 crc kubenswrapper[4720]: I0202 09:10:29.645649 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerID="decc215068561f7affbf1f00b7635ee0e49b925ea7ff01c428015cb1a7ea6f1c" exitCode=0 Feb 02 09:10:29 crc kubenswrapper[4720]: I0202 09:10:29.645689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" event={"ID":"e0cc5d55-a9c9-4d80-9650-7eab31776b2c","Type":"ContainerDied","Data":"decc215068561f7affbf1f00b7635ee0e49b925ea7ff01c428015cb1a7ea6f1c"} Feb 02 09:10:29 crc kubenswrapper[4720]: I0202 09:10:29.645711 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" event={"ID":"e0cc5d55-a9c9-4d80-9650-7eab31776b2c","Type":"ContainerStarted","Data":"0083e9e86ea8f0569cf806f41708ec5ea11fb5e01ae4b9bf3590b1405fd6c727"} Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.124126 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9pczc_9e7ec368-a244-4b1c-a313-987332c21d0e/console/0.log" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.124432 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247286 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247360 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247418 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247477 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247527 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtpq\" (UniqueName: \"kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.247619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config\") pod \"9e7ec368-a244-4b1c-a313-987332c21d0e\" (UID: \"9e7ec368-a244-4b1c-a313-987332c21d0e\") " Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.248012 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config" (OuterVolumeSpecName: "console-config") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.248401 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca" (OuterVolumeSpecName: "service-ca") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.248627 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.249762 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.254475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq" (OuterVolumeSpecName: "kube-api-access-xjtpq") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "kube-api-access-xjtpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.257208 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.257521 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9e7ec368-a244-4b1c-a313-987332c21d0e" (UID: "9e7ec368-a244-4b1c-a313-987332c21d0e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349314 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349372 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtpq\" (UniqueName: \"kubernetes.io/projected/9e7ec368-a244-4b1c-a313-987332c21d0e-kube-api-access-xjtpq\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349397 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349414 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349432 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349448 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e7ec368-a244-4b1c-a313-987332c21d0e-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.349464 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e7ec368-a244-4b1c-a313-987332c21d0e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.655136 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9pczc_9e7ec368-a244-4b1c-a313-987332c21d0e/console/0.log" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.655915 4720 generic.go:334] "Generic (PLEG): container finished" podID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerID="1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e" exitCode=2 Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.656010 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9pczc" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.655981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9pczc" event={"ID":"9e7ec368-a244-4b1c-a313-987332c21d0e","Type":"ContainerDied","Data":"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e"} Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.656101 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9pczc" event={"ID":"9e7ec368-a244-4b1c-a313-987332c21d0e","Type":"ContainerDied","Data":"2108410cbc51b55fba7e66006340454bd6c71639a898b8e1ff47917a0a0c3db9"} Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.656131 4720 scope.go:117] "RemoveContainer" containerID="1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.693016 4720 scope.go:117] "RemoveContainer" containerID="1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e" Feb 02 09:10:30 crc kubenswrapper[4720]: E0202 09:10:30.693654 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e\": container with ID starting with 1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e not found: ID does not exist" containerID="1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.693710 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e"} err="failed to get container status \"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e\": rpc error: code = NotFound desc = could not find container \"1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e\": container with ID starting with 1aaf9485f8ad2c31fbd3165efbf673df059235732c803ae5cba33b21fabd3f2e not found: ID does not exist" Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.724086 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.731217 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9pczc"] Feb 02 09:10:30 crc kubenswrapper[4720]: I0202 09:10:30.898709 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" path="/var/lib/kubelet/pods/9e7ec368-a244-4b1c-a313-987332c21d0e/volumes" Feb 02 09:10:31 crc kubenswrapper[4720]: I0202 09:10:31.665151 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerID="49ee7452ccaf400135c56cebf9de0fd48a570d658b3ccfb39c970384574266b3" exitCode=0 Feb 02 09:10:31 crc kubenswrapper[4720]: I0202 09:10:31.665402 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" event={"ID":"e0cc5d55-a9c9-4d80-9650-7eab31776b2c","Type":"ContainerDied","Data":"49ee7452ccaf400135c56cebf9de0fd48a570d658b3ccfb39c970384574266b3"} Feb 02 09:10:32 crc kubenswrapper[4720]: I0202 09:10:32.679119 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" 
containerID="ec50a8ff9f0e9bf878969cce622728946dcb90f17a394724c98d62d957825c4b" exitCode=0 Feb 02 09:10:32 crc kubenswrapper[4720]: I0202 09:10:32.679180 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" event={"ID":"e0cc5d55-a9c9-4d80-9650-7eab31776b2c","Type":"ContainerDied","Data":"ec50a8ff9f0e9bf878969cce622728946dcb90f17a394724c98d62d957825c4b"} Feb 02 09:10:33 crc kubenswrapper[4720]: I0202 09:10:33.979048 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.004187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle\") pod \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.004246 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmd5h\" (UniqueName: \"kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h\") pod \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.004282 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util\") pod \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\" (UID: \"e0cc5d55-a9c9-4d80-9650-7eab31776b2c\") " Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.006751 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle" (OuterVolumeSpecName: "bundle") pod "e0cc5d55-a9c9-4d80-9650-7eab31776b2c" (UID: "e0cc5d55-a9c9-4d80-9650-7eab31776b2c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.012027 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h" (OuterVolumeSpecName: "kube-api-access-jmd5h") pod "e0cc5d55-a9c9-4d80-9650-7eab31776b2c" (UID: "e0cc5d55-a9c9-4d80-9650-7eab31776b2c"). InnerVolumeSpecName "kube-api-access-jmd5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.034231 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util" (OuterVolumeSpecName: "util") pod "e0cc5d55-a9c9-4d80-9650-7eab31776b2c" (UID: "e0cc5d55-a9c9-4d80-9650-7eab31776b2c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.105530 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmd5h\" (UniqueName: \"kubernetes.io/projected/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-kube-api-access-jmd5h\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.105579 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-util\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.105599 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0cc5d55-a9c9-4d80-9650-7eab31776b2c-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.699543 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" event={"ID":"e0cc5d55-a9c9-4d80-9650-7eab31776b2c","Type":"ContainerDied","Data":"0083e9e86ea8f0569cf806f41708ec5ea11fb5e01ae4b9bf3590b1405fd6c727"} Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.699601 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0083e9e86ea8f0569cf806f41708ec5ea11fb5e01ae4b9bf3590b1405fd6c727" Feb 02 09:10:34 crc kubenswrapper[4720]: I0202 09:10:34.699729 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179027 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d"] Feb 02 09:10:43 crc kubenswrapper[4720]: E0202 09:10:43.179748 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="extract" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179764 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="extract" Feb 02 09:10:43 crc kubenswrapper[4720]: E0202 09:10:43.179777 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="pull" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179783 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="pull" Feb 02 09:10:43 crc kubenswrapper[4720]: E0202 09:10:43.179792 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="util" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179798 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="util" Feb 02 09:10:43 crc kubenswrapper[4720]: E0202 09:10:43.179811 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerName="console" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179817 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerName="console" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179943 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7ec368-a244-4b1c-a313-987332c21d0e" containerName="console" Feb 
02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.179955 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cc5d55-a9c9-4d80-9650-7eab31776b2c" containerName="extract" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.180295 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.182453 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.182941 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.183603 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4tcrz" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.183634 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.185560 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.202767 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d"] Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.243445 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgh8g\" (UniqueName: \"kubernetes.io/projected/44b51ab2-e087-4a71-84cd-99575451219a-kube-api-access-dgh8g\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.243553 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-apiservice-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.243593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-webhook-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.344188 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-apiservice-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.344236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-webhook-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.344274 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgh8g\" (UniqueName: \"kubernetes.io/projected/44b51ab2-e087-4a71-84cd-99575451219a-kube-api-access-dgh8g\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.350073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-webhook-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.362569 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44b51ab2-e087-4a71-84cd-99575451219a-apiservice-cert\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.364714 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgh8g\" (UniqueName: \"kubernetes.io/projected/44b51ab2-e087-4a71-84cd-99575451219a-kube-api-access-dgh8g\") pod \"metallb-operator-controller-manager-86589bcccc-n9w8d\" (UID: \"44b51ab2-e087-4a71-84cd-99575451219a\") " pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.495975 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.510068 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7648947864-tvl8z"] Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.510951 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.512701 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.516041 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-f8tbk" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.517558 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.535606 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7648947864-tvl8z"] Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.647982 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-apiservice-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.648442 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-webhook-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.648466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqtb\" (UniqueName: \"kubernetes.io/projected/28f5dbe0-77e9-47e8-bf43-207d6467558d-kube-api-access-cjqtb\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.749133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-apiservice-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.749197 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-webhook-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.749220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqtb\" (UniqueName: \"kubernetes.io/projected/28f5dbe0-77e9-47e8-bf43-207d6467558d-kube-api-access-cjqtb\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 
09:10:43.756866 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-apiservice-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.759333 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f5dbe0-77e9-47e8-bf43-207d6467558d-webhook-cert\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.785762 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqtb\" (UniqueName: \"kubernetes.io/projected/28f5dbe0-77e9-47e8-bf43-207d6467558d-kube-api-access-cjqtb\") pod \"metallb-operator-webhook-server-7648947864-tvl8z\" (UID: \"28f5dbe0-77e9-47e8-bf43-207d6467558d\") " pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.888227 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:43 crc kubenswrapper[4720]: I0202 09:10:43.957108 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d"] Feb 02 09:10:43 crc kubenswrapper[4720]: W0202 09:10:43.964750 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b51ab2_e087_4a71_84cd_99575451219a.slice/crio-a1abd8887aea5a0dc8be19f291a565e86858d83db0080e0ecda53dc7882c7400 WatchSource:0}: Error finding container a1abd8887aea5a0dc8be19f291a565e86858d83db0080e0ecda53dc7882c7400: Status 404 returned error can't find the container with id a1abd8887aea5a0dc8be19f291a565e86858d83db0080e0ecda53dc7882c7400 Feb 02 09:10:44 crc kubenswrapper[4720]: I0202 09:10:44.104303 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7648947864-tvl8z"] Feb 02 09:10:44 crc kubenswrapper[4720]: W0202 09:10:44.114187 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f5dbe0_77e9_47e8_bf43_207d6467558d.slice/crio-ba0dc17643aab5715a3a59b639df05f4f5f94f07025bb653b171b92531824d82 WatchSource:0}: Error finding container ba0dc17643aab5715a3a59b639df05f4f5f94f07025bb653b171b92531824d82: Status 404 returned error can't find the container with id ba0dc17643aab5715a3a59b639df05f4f5f94f07025bb653b171b92531824d82 Feb 02 09:10:44 crc kubenswrapper[4720]: I0202 09:10:44.778440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" event={"ID":"44b51ab2-e087-4a71-84cd-99575451219a","Type":"ContainerStarted","Data":"a1abd8887aea5a0dc8be19f291a565e86858d83db0080e0ecda53dc7882c7400"} Feb 02 09:10:44 crc kubenswrapper[4720]: I0202 09:10:44.780217 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" 
event={"ID":"28f5dbe0-77e9-47e8-bf43-207d6467558d","Type":"ContainerStarted","Data":"ba0dc17643aab5715a3a59b639df05f4f5f94f07025bb653b171b92531824d82"} Feb 02 09:10:48 crc kubenswrapper[4720]: I0202 09:10:48.818612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" event={"ID":"28f5dbe0-77e9-47e8-bf43-207d6467558d","Type":"ContainerStarted","Data":"601ae0173a2b38e9752b3c8b52891a50826cd51233e735cc5477912062e4e2e9"} Feb 02 09:10:48 crc kubenswrapper[4720]: I0202 09:10:48.819266 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:10:48 crc kubenswrapper[4720]: I0202 09:10:48.819799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" event={"ID":"44b51ab2-e087-4a71-84cd-99575451219a","Type":"ContainerStarted","Data":"de314ce1181dce37e0a83f4b50f153407930e456b8f796243ea20818e64980e2"} Feb 02 09:10:48 crc kubenswrapper[4720]: I0202 09:10:48.820380 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:10:48 crc kubenswrapper[4720]: I0202 09:10:48.859351 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" podStartSLOduration=1.488426434 podStartE2EDuration="5.859328909s" podCreationTimestamp="2026-02-02 09:10:43 +0000 UTC" firstStartedPulling="2026-02-02 09:10:44.117270366 +0000 UTC m=+877.972895932" lastFinishedPulling="2026-02-02 09:10:48.488172851 +0000 UTC m=+882.343798407" observedRunningTime="2026-02-02 09:10:48.850837756 +0000 UTC m=+882.706463332" watchObservedRunningTime="2026-02-02 09:10:48.859328909 +0000 UTC m=+882.714954465" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.691870 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" podStartSLOduration=7.178113103 podStartE2EDuration="11.691849478s" podCreationTimestamp="2026-02-02 09:10:43 +0000 UTC" firstStartedPulling="2026-02-02 09:10:43.967707448 +0000 UTC m=+877.823333004" lastFinishedPulling="2026-02-02 09:10:48.481443823 +0000 UTC m=+882.337069379" observedRunningTime="2026-02-02 09:10:48.890870728 +0000 UTC m=+882.746496284" watchObservedRunningTime="2026-02-02 09:10:54.691849478 +0000 UTC m=+888.547475034" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.692590 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tq2xn"] Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.693939 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.708354 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tq2xn"] Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.798359 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.798421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.799168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwkg\" (UniqueName: \"kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.901301 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwkg\" (UniqueName: \"kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.901437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.901461 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.902091 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.902138 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:54 crc kubenswrapper[4720]: I0202 09:10:54.929249 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ghwkg\" (UniqueName: \"kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg\") pod \"community-operators-tq2xn\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:55 crc kubenswrapper[4720]: I0202 09:10:55.024997 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:10:55 crc kubenswrapper[4720]: I0202 09:10:55.562498 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tq2xn"] Feb 02 09:10:55 crc kubenswrapper[4720]: I0202 09:10:55.865046 4720 generic.go:334] "Generic (PLEG): container finished" podID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerID="bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777" exitCode=0 Feb 02 09:10:55 crc kubenswrapper[4720]: I0202 09:10:55.865209 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerDied","Data":"bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777"} Feb 02 09:10:55 crc kubenswrapper[4720]: I0202 09:10:55.865318 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerStarted","Data":"44421c49dc8e0d143fe60caff9cc1b59f6793dfc1b3af79c3f896f2798e681c7"} Feb 02 09:10:56 crc kubenswrapper[4720]: I0202 09:10:56.875051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerStarted","Data":"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"} Feb 02 09:10:57 crc kubenswrapper[4720]: I0202 09:10:57.883356 4720 generic.go:334] "Generic (PLEG): container finished" podID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerID="2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c" exitCode=0 Feb 02 09:10:57 crc kubenswrapper[4720]: I0202 09:10:57.883419 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerDied","Data":"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"} Feb 02 09:10:58 crc kubenswrapper[4720]: I0202 09:10:58.899688 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerStarted","Data":"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"} Feb 02 09:10:58 crc kubenswrapper[4720]: I0202 09:10:58.920964 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tq2xn" podStartSLOduration=2.42866141 podStartE2EDuration="4.920945228s" podCreationTimestamp="2026-02-02 09:10:54 +0000 UTC" firstStartedPulling="2026-02-02 09:10:55.866434648 +0000 UTC m=+889.722060204" lastFinishedPulling="2026-02-02 09:10:58.358718466 +0000 UTC m=+892.214344022" observedRunningTime="2026-02-02 09:10:58.919687758 +0000 UTC m=+892.775313314" watchObservedRunningTime="2026-02-02 09:10:58.920945228 +0000 UTC m=+892.776570784" Feb 02 09:11:03 crc kubenswrapper[4720]: I0202 09:11:03.893984 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/metallb-operator-webhook-server-7648947864-tvl8z" Feb 02 09:11:05 crc kubenswrapper[4720]: I0202 09:11:05.026060 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:11:05 crc kubenswrapper[4720]: I0202 09:11:05.026330 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:11:05 crc kubenswrapper[4720]: I0202 09:11:05.068734 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:11:06 crc kubenswrapper[4720]: I0202 09:11:06.020451 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:11:06 crc kubenswrapper[4720]: I0202 09:11:06.064643 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tq2xn"] Feb 02 09:11:07 crc kubenswrapper[4720]: I0202 09:11:07.947932 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tq2xn" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="registry-server" containerID="cri-o://a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221" gracePeriod=2 Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.580698 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tq2xn" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.776935 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content\") pod \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.777035 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities\") pod \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.777092 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghwkg\" (UniqueName: \"kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg\") pod \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\" (UID: \"a974c0d0-8070-4bb6-85e0-b7ca976c822e\") " Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.778868 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities" (OuterVolumeSpecName: "utilities") pod "a974c0d0-8070-4bb6-85e0-b7ca976c822e" (UID: "a974c0d0-8070-4bb6-85e0-b7ca976c822e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.788181 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg" (OuterVolumeSpecName: "kube-api-access-ghwkg") pod "a974c0d0-8070-4bb6-85e0-b7ca976c822e" (UID: "a974c0d0-8070-4bb6-85e0-b7ca976c822e"). InnerVolumeSpecName "kube-api-access-ghwkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.857044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a974c0d0-8070-4bb6-85e0-b7ca976c822e" (UID: "a974c0d0-8070-4bb6-85e0-b7ca976c822e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.879013 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghwkg\" (UniqueName: \"kubernetes.io/projected/a974c0d0-8070-4bb6-85e0-b7ca976c822e-kube-api-access-ghwkg\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.879067 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.879085 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a974c0d0-8070-4bb6-85e0-b7ca976c822e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.956853 4720 generic.go:334] "Generic (PLEG): container finished" podID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerID="a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221" exitCode=0 Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.956919 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerDied","Data":"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"} Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.956946 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tq2xn" event={"ID":"a974c0d0-8070-4bb6-85e0-b7ca976c822e","Type":"ContainerDied","Data":"44421c49dc8e0d143fe60caff9cc1b59f6793dfc1b3af79c3f896f2798e681c7"} Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.956950 4720 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.957055 4720 scope.go:117] "RemoveContainer" containerID="a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"
Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.979047 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tq2xn"]
Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.982743 4720 scope.go:117] "RemoveContainer" containerID="2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"
Feb 02 09:11:08 crc kubenswrapper[4720]: I0202 09:11:08.987137 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tq2xn"]
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.004266 4720 scope.go:117] "RemoveContainer" containerID="bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.021745 4720 scope.go:117] "RemoveContainer" containerID="a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"
Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.022163 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221\": container with ID starting with a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221 not found: ID does not exist" containerID="a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.022198 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221"} err="failed to get container status \"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221\": rpc error: code = NotFound desc = could not find container \"a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221\": container with ID starting with a65eaf316db8573a4232faa8467c6e9b53b29c93a4295cab54b50d144f303221 not found: ID does not exist"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.022221 4720 scope.go:117] "RemoveContainer" containerID="2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"
Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.022864 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c\": container with ID starting with 2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c not found: ID does not exist" containerID="2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.022896 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c"} err="failed to get container status \"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c\": rpc error: code = NotFound desc = could not find container \"2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c\": container with ID starting with 2fbbdc7c79d81adb5820cde4772dd82d055844c30f57f6338851e8e540f3195c not found: ID does not exist"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.022910 4720 scope.go:117] "RemoveContainer" containerID="bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777"
containerID="bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777" Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.023224 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777\": container with ID starting with bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777 not found: ID does not exist" containerID="bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.023261 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777"} err="failed to get container status \"bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777\": rpc error: code = NotFound desc = could not find container \"bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777\": container with ID starting with bd0965400a089d3046e891de1637fa0c3d2849066e90948289898af0e6820777 not found: ID does not exist" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.717719 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"] Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.718151 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="extract-utilities" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.718162 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="extract-utilities" Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.718174 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="extract-content" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.718179 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="extract-content" Feb 02 09:11:09 crc kubenswrapper[4720]: E0202 09:11:09.718197 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="registry-server" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.718203 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="registry-server" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.718726 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" containerName="registry-server" Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.736187 4720 util.go:30] "No sandbox for pod can be found. 
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.747789 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"]
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.789868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.790021 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdws\" (UniqueName: \"kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.790287 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.891216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.891290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwdws\" (UniqueName: \"kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.891371 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.891702 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.891833 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:09 crc kubenswrapper[4720]: I0202 09:11:09.909111 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwdws\" (UniqueName: \"kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws\") pod \"redhat-marketplace-nj7jq\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") " pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.111566 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj7jq"
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.371578 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"]
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.896469 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a974c0d0-8070-4bb6-85e0-b7ca976c822e" path="/var/lib/kubelet/pods/a974c0d0-8070-4bb6-85e0-b7ca976c822e/volumes"
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.972536 4720 generic.go:334] "Generic (PLEG): container finished" podID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerID="5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e" exitCode=0
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.972588 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerDied","Data":"5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e"}
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.972616 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerStarted","Data":"9959df9d7f11d1f20678058be84105a0773f28824eda8044c159ef5c98fa0ddc"}
Feb 02 09:11:10 crc kubenswrapper[4720]: I0202 09:11:10.975091 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 09:11:11 crc kubenswrapper[4720]: I0202 09:11:11.987999 4720 generic.go:334] "Generic (PLEG): container finished" podID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerID="be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b" exitCode=0
Feb 02 09:11:11 crc kubenswrapper[4720]: I0202 09:11:11.988061 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerDied","Data":"be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b"}
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.115561 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n859n"]
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.116898 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.131723 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n859n"]
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.224380 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.224452 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.224474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.325943 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.326002 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.326022 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.326447 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.326501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.352332 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n"
"MountVolume.SetUp succeeded for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") pod \"certified-operators-n859n\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.432527 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.746087 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n859n"] Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.995835 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerStarted","Data":"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e"} Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.998929 4720 generic.go:334] "Generic (PLEG): container finished" podID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerID="390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1" exitCode=0 Feb 02 09:11:12 crc kubenswrapper[4720]: I0202 09:11:12.999218 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerDied","Data":"390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1"} Feb 02 09:11:13 crc kubenswrapper[4720]: I0202 09:11:13.000103 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerStarted","Data":"13c73dcc658657046fcabd2a4b55b637fdfdd1acecada43d64fda92bd971797f"} Feb 02 09:11:13 crc kubenswrapper[4720]: I0202 09:11:13.021210 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nj7jq" podStartSLOduration=2.564710742 podStartE2EDuration="4.021192309s" podCreationTimestamp="2026-02-02 09:11:09 +0000 UTC" firstStartedPulling="2026-02-02 09:11:10.974703435 +0000 UTC m=+904.830328991" lastFinishedPulling="2026-02-02 09:11:12.431184992 +0000 UTC m=+906.286810558" observedRunningTime="2026-02-02 09:11:13.01921604 +0000 UTC m=+906.874841596" watchObservedRunningTime="2026-02-02 09:11:13.021192309 +0000 UTC m=+906.876817865" Feb 02 09:11:14 crc kubenswrapper[4720]: I0202 09:11:14.009321 4720 generic.go:334] "Generic (PLEG): container finished" podID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerID="6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272" exitCode=0 Feb 02 09:11:14 crc kubenswrapper[4720]: I0202 09:11:14.009440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerDied","Data":"6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272"} Feb 02 09:11:15 crc kubenswrapper[4720]: I0202 09:11:15.017406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerStarted","Data":"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54"} Feb 02 09:11:15 crc kubenswrapper[4720]: I0202 09:11:15.040460 4720 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-n859n" podStartSLOduration=1.626577411 podStartE2EDuration="3.040439512s" podCreationTimestamp="2026-02-02 09:11:12 +0000 UTC" firstStartedPulling="2026-02-02 09:11:13.000474841 +0000 UTC m=+906.856100387" lastFinishedPulling="2026-02-02 09:11:14.414336932 +0000 UTC m=+908.269962488" observedRunningTime="2026-02-02 09:11:15.034460663 +0000 UTC m=+908.890086239" watchObservedRunningTime="2026-02-02 09:11:15.040439512 +0000 UTC m=+908.896065078" Feb 02 09:11:20 crc kubenswrapper[4720]: I0202 09:11:20.112780 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nj7jq" Feb 02 09:11:20 crc kubenswrapper[4720]: I0202 09:11:20.113210 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nj7jq" Feb 02 09:11:20 crc kubenswrapper[4720]: I0202 09:11:20.173029 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nj7jq" Feb 02 09:11:21 crc kubenswrapper[4720]: I0202 09:11:21.133235 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nj7jq" Feb 02 09:11:21 crc kubenswrapper[4720]: I0202 09:11:21.204743 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"] Feb 02 09:11:22 crc kubenswrapper[4720]: I0202 09:11:22.432973 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:22 crc kubenswrapper[4720]: I0202 09:11:22.433079 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:22 crc kubenswrapper[4720]: I0202 09:11:22.499849 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.077632 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nj7jq" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="registry-server" containerID="cri-o://3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e" gracePeriod=2 Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.156915 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.500324 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-86589bcccc-n9w8d" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.515491 4720 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.677288 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwdws\" (UniqueName: \"kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws\") pod \"a20e9298-d007-4ff6-b75e-cb95296f6937\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") "
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.677692 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content\") pod \"a20e9298-d007-4ff6-b75e-cb95296f6937\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") "
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.677868 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities\") pod \"a20e9298-d007-4ff6-b75e-cb95296f6937\" (UID: \"a20e9298-d007-4ff6-b75e-cb95296f6937\") "
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.679030 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities" (OuterVolumeSpecName: "utilities") pod "a20e9298-d007-4ff6-b75e-cb95296f6937" (UID: "a20e9298-d007-4ff6-b75e-cb95296f6937"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.687106 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws" (OuterVolumeSpecName: "kube-api-access-gwdws") pod "a20e9298-d007-4ff6-b75e-cb95296f6937" (UID: "a20e9298-d007-4ff6-b75e-cb95296f6937"). InnerVolumeSpecName "kube-api-access-gwdws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.718551 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a20e9298-d007-4ff6-b75e-cb95296f6937" (UID: "a20e9298-d007-4ff6-b75e-cb95296f6937"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.779423 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwdws\" (UniqueName: \"kubernetes.io/projected/a20e9298-d007-4ff6-b75e-cb95296f6937-kube-api-access-gwdws\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.779474 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:23 crc kubenswrapper[4720]: I0202 09:11:23.779494 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e9298-d007-4ff6-b75e-cb95296f6937-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.018704 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n859n"] Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.084903 4720 generic.go:334] "Generic (PLEG): container finished" podID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerID="3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e" exitCode=0 Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.084999 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nj7jq" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.084987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerDied","Data":"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e"} Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.085053 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nj7jq" event={"ID":"a20e9298-d007-4ff6-b75e-cb95296f6937","Type":"ContainerDied","Data":"9959df9d7f11d1f20678058be84105a0773f28824eda8044c159ef5c98fa0ddc"} Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.085086 4720 scope.go:117] "RemoveContainer" containerID="3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.099053 4720 scope.go:117] "RemoveContainer" containerID="be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.122523 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"] Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.122580 4720 scope.go:117] "RemoveContainer" containerID="5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.126907 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nj7jq"] Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.146089 4720 scope.go:117] "RemoveContainer" containerID="3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e" Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.146733 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e\": container with ID starting with 3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e not 
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.146781 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e"} err="failed to get container status \"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e\": rpc error: code = NotFound desc = could not find container \"3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e\": container with ID starting with 3fbcd73ea8dfb0c3aa2fa38e246ea2d752d81a0de59d8cc44efe474aca1ed56e not found: ID does not exist"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.146813 4720 scope.go:117] "RemoveContainer" containerID="be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b"
Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.147124 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b\": container with ID starting with be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b not found: ID does not exist" containerID="be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.147302 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b"} err="failed to get container status \"be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b\": rpc error: code = NotFound desc = could not find container \"be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b\": container with ID starting with be1cc18427341e73dd93ceccece13a5390316b23fda320b0c772642502cbd41b not found: ID does not exist"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.147408 4720 scope.go:117] "RemoveContainer" containerID="5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e"
Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.149120 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e\": container with ID starting with 5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e not found: ID does not exist" containerID="5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.149240 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e"} err="failed to get container status \"5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e\": rpc error: code = NotFound desc = could not find container \"5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e\": container with ID starting with 5525b8b31cf4f8c4d5cd4441398c7752c2c76d43ab638da07338bf6af1e2a89e not found: ID does not exist"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.214905 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k2llz"]
Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.215378 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="extract-content"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.215459 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="extract-content"
Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.215544 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="extract-utilities"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.215604 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="extract-utilities"
Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.215674 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="registry-server"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.215724 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="registry-server"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.215869 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" containerName="registry-server"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.218319 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.218905 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97"]
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.219487 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.221933 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.222039 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.222053 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.222241 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dq64q"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.226160 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97"]
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.285847 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.285940 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-conf\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.285959 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-metrics\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
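The interleaved cpu_manager, state_mem, and memory_manager entries fire while admitting the new frr-k8s pod: both resource managers sweep their checkpointed per-container assignments and drop entries belonging to pods that no longer exist, here the just-deleted catalog pod. A map-based sketch of that sweep (a simplification of the kubelet's checkpointed state):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops saved assignments for pods that no longer exist,
// as the cpu_manager/memory_manager entries above do for the deleted
// catalog pod before admitting the next one.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"a20e9298-d007-4ff6-b75e-cb95296f6937", "extract-content"}: "cpus 0-1",
		{"a20e9298-d007-4ff6-b75e-cb95296f6937", "registry-server"}: "cpus 2-3",
	}
	removeStaleState(assignments, map[string]bool{}) // deleted pod is absent
	fmt.Println("remaining assignments:", len(assignments))
}
```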
\"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-metrics\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.285991 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0c063cbf-7388-4656-ab8b-0796a145119e-frr-startup\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.286138 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-sockets\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.286168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-reloader\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.286190 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rg2r\" (UniqueName: \"kubernetes.io/projected/0c063cbf-7388-4656-ab8b-0796a145119e-kube-api-access-9rg2r\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.286216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdebd093-0e66-4e15-b5b4-9052f4f4c487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.286241 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wkv\" (UniqueName: \"kubernetes.io/projected/fdebd093-0e66-4e15-b5b4-9052f4f4c487-kube-api-access-j2wkv\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.298535 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rzstb"] Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.299443 4720 util.go:30] "No sandbox for pod can be found. 
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.300788 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slqsx"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.302450 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.302450 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.302462 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.315208 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8kt65"]
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.316367 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8kt65"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.317944 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.328951 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8kt65"]
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.386968 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0c063cbf-7388-4656-ab8b-0796a145119e-frr-startup\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-sockets\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387091 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metrics-certs\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-reloader\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387159 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rg2r\" (UniqueName: \"kubernetes.io/projected/0c063cbf-7388-4656-ab8b-0796a145119e-kube-api-access-9rg2r\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz"
\"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdebd093-0e66-4e15-b5b4-9052f4f4c487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wkv\" (UniqueName: \"kubernetes.io/projected/fdebd093-0e66-4e15-b5b4-9052f4f4c487-kube-api-access-j2wkv\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387212 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdpq\" (UniqueName: \"kubernetes.io/projected/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-kube-api-access-cvdpq\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387230 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5v6\" (UniqueName: \"kubernetes.io/projected/02e03625-5584-44e1-8fe5-2551b8d05596-kube-api-access-dr5v6\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387276 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387292 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-conf\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-metrics\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metallb-excludel2\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " 
pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-cert\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387721 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-sockets\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387937 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-reloader\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.387934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0c063cbf-7388-4656-ab8b-0796a145119e-frr-startup\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.388029 4720 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.388084 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs podName:0c063cbf-7388-4656-ab8b-0796a145119e nodeName:}" failed. No retries permitted until 2026-02-02 09:11:24.888064705 +0000 UTC m=+918.743690261 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs") pod "frr-k8s-k2llz" (UID: "0c063cbf-7388-4656-ab8b-0796a145119e") : secret "frr-k8s-certs-secret" not found Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.388349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-metrics\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.388445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0c063cbf-7388-4656-ab8b-0796a145119e-frr-conf\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.399400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdebd093-0e66-4e15-b5b4-9052f4f4c487-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.404401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wkv\" (UniqueName: \"kubernetes.io/projected/fdebd093-0e66-4e15-b5b4-9052f4f4c487-kube-api-access-j2wkv\") pod \"frr-k8s-webhook-server-7df86c4f6c-vmc97\" (UID: \"fdebd093-0e66-4e15-b5b4-9052f4f4c487\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.410286 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rg2r\" (UniqueName: \"kubernetes.io/projected/0c063cbf-7388-4656-ab8b-0796a145119e-kube-api-access-9rg2r\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488047 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metrics-certs\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488114 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdpq\" (UniqueName: \"kubernetes.io/projected/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-kube-api-access-cvdpq\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr5v6\" (UniqueName: \"kubernetes.io/projected/02e03625-5584-44e1-8fe5-2551b8d05596-kube-api-access-dr5v6\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488180 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs\") pod \"controller-6968d8fdc4-8kt65\" (UID: 
\"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metallb-excludel2\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-cert\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488621 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.488669 4720 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.488750 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.488757 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs podName:02e03625-5584-44e1-8fe5-2551b8d05596 nodeName:}" failed. No retries permitted until 2026-02-02 09:11:24.988733581 +0000 UTC m=+918.844359137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs") pod "controller-6968d8fdc4-8kt65" (UID: "02e03625-5584-44e1-8fe5-2551b8d05596") : secret "controller-certs-secret" not found Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.488813 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist podName:e75ccb9d-f65f-40ac-8255-92685e9d3dd3 nodeName:}" failed. No retries permitted until 2026-02-02 09:11:24.988795343 +0000 UTC m=+918.844421019 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist") pod "speaker-rzstb" (UID: "e75ccb9d-f65f-40ac-8255-92685e9d3dd3") : secret "metallb-memberlist" not found Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.488908 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metallb-excludel2\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.490810 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.493265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-metrics-certs\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.503291 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-cert\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.507510 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdpq\" (UniqueName: \"kubernetes.io/projected/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-kube-api-access-cvdpq\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.522523 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr5v6\" (UniqueName: \"kubernetes.io/projected/02e03625-5584-44e1-8fe5-2551b8d05596-kube-api-access-dr5v6\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.547605 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.725309 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97"] Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.894224 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.905014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c063cbf-7388-4656-ab8b-0796a145119e-metrics-certs\") pod \"frr-k8s-k2llz\" (UID: \"0c063cbf-7388-4656-ab8b-0796a145119e\") " pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.907128 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20e9298-d007-4ff6-b75e-cb95296f6937" path="/var/lib/kubelet/pods/a20e9298-d007-4ff6-b75e-cb95296f6937/volumes" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.995515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:24 crc kubenswrapper[4720]: I0202 09:11:24.995576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.995716 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 09:11:24 crc kubenswrapper[4720]: E0202 09:11:24.995762 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist podName:e75ccb9d-f65f-40ac-8255-92685e9d3dd3 nodeName:}" failed. No retries permitted until 2026-02-02 09:11:25.995747484 +0000 UTC m=+919.851373040 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist") pod "speaker-rzstb" (UID: "e75ccb9d-f65f-40ac-8255-92685e9d3dd3") : secret "metallb-memberlist" not found Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.004240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e03625-5584-44e1-8fe5-2551b8d05596-metrics-certs\") pod \"controller-6968d8fdc4-8kt65\" (UID: \"02e03625-5584-44e1-8fe5-2551b8d05596\") " pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.095093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" event={"ID":"fdebd093-0e66-4e15-b5b4-9052f4f4c487","Type":"ContainerStarted","Data":"e149e180babf56bfe0c61ca2135642383b0d7e3e9d3afcbf247da63ef832b4df"} Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.095451 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n859n" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="registry-server" containerID="cri-o://7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54" gracePeriod=2 Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.139946 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.290074 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.507629 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.538431 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8kt65"] Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.707980 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content\") pod \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.708371 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities\") pod \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.708445 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") pod \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\" (UID: \"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5\") " Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.709844 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities" (OuterVolumeSpecName: "utilities") pod "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" (UID: "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.719191 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js" (OuterVolumeSpecName: "kube-api-access-fk4js") pod "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" (UID: "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5"). InnerVolumeSpecName "kube-api-access-fk4js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.784059 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" (UID: "a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.810538 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.810594 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:25 crc kubenswrapper[4720]: I0202 09:11:25.810614 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4js\" (UniqueName: \"kubernetes.io/projected/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5-kube-api-access-fk4js\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.013689 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.017251 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e75ccb9d-f65f-40ac-8255-92685e9d3dd3-memberlist\") pod \"speaker-rzstb\" (UID: \"e75ccb9d-f65f-40ac-8255-92685e9d3dd3\") " pod="metallb-system/speaker-rzstb" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.112691 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rzstb" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.115643 4720 generic.go:334] "Generic (PLEG): container finished" podID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerID="7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54" exitCode=0 Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.115902 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerDied","Data":"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.115933 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n859n" event={"ID":"a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5","Type":"ContainerDied","Data":"13c73dcc658657046fcabd2a4b55b637fdfdd1acecada43d64fda92bd971797f"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.115954 4720 scope.go:117] "RemoveContainer" containerID="7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.116077 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n859n" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.128768 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8kt65" event={"ID":"02e03625-5584-44e1-8fe5-2551b8d05596","Type":"ContainerStarted","Data":"48d4e2a3760843d65fcc93357ca4cbb930a1081d1dda2499e0f6f53d1ab0873e"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.128839 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8kt65" event={"ID":"02e03625-5584-44e1-8fe5-2551b8d05596","Type":"ContainerStarted","Data":"23671ee7dbee1b2708d09a217c5c9dddafddf08b699205d40ba6f3b4f70b20a2"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.128856 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8kt65" event={"ID":"02e03625-5584-44e1-8fe5-2551b8d05596","Type":"ContainerStarted","Data":"05cec65e0e0a25417dbde72050421f8ec62cee00cc587b9ae1818cc6babca6f1"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.128948 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.131324 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"371fe771c26e56ba2a891da9da262bf0a5238bdbc59a9e66ef488370df7efb1e"} Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.150972 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n859n"] Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.154120 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n859n"] Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.174589 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8kt65" podStartSLOduration=2.17456507 podStartE2EDuration="2.17456507s" podCreationTimestamp="2026-02-02 09:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 09:11:26.167480253 +0000 UTC m=+920.023105839" watchObservedRunningTime="2026-02-02 09:11:26.17456507 +0000 UTC m=+920.030190666" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.179235 4720 scope.go:117] "RemoveContainer" containerID="6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272" Feb 02 09:11:26 crc kubenswrapper[4720]: W0202 09:11:26.185215 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75ccb9d_f65f_40ac_8255_92685e9d3dd3.slice/crio-ab1965844fd0543eeb51a081aa0192980b130eab3a7a95087cca643021867a75 WatchSource:0}: Error finding container ab1965844fd0543eeb51a081aa0192980b130eab3a7a95087cca643021867a75: Status 404 returned error can't find the container with id ab1965844fd0543eeb51a081aa0192980b130eab3a7a95087cca643021867a75 Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.236529 4720 scope.go:117] "RemoveContainer" containerID="390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.309351 4720 scope.go:117] "RemoveContainer" containerID="7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54" Feb 02 09:11:26 crc kubenswrapper[4720]: E0202 09:11:26.311067 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54\": container with ID starting with 7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54 not found: ID does not exist" containerID="7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.311106 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54"} err="failed to get container status \"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54\": rpc error: code = NotFound desc = could not find container \"7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54\": container with ID starting with 7c5eee39f2f1892ce5fc237ee21f1663faddf44e93a5ec9302c0562c67c0ed54 not found: ID does not exist" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.311134 4720 scope.go:117] "RemoveContainer" containerID="6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272" Feb 02 09:11:26 crc kubenswrapper[4720]: E0202 09:11:26.311587 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272\": container with ID starting with 6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272 not found: ID does not exist" containerID="6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.311623 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272"} err="failed to get container status \"6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272\": rpc error: code = NotFound desc = could not find container \"6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272\": container with ID starting with 6cf4dd52501dc2170a5e5193337e19314699f52985e4a2cd82d0b1b95d98f272 not found: ID does not exist" Feb 02 09:11:26 crc 
kubenswrapper[4720]: I0202 09:11:26.311643 4720 scope.go:117] "RemoveContainer" containerID="390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1" Feb 02 09:11:26 crc kubenswrapper[4720]: E0202 09:11:26.312087 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1\": container with ID starting with 390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1 not found: ID does not exist" containerID="390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.312112 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1"} err="failed to get container status \"390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1\": rpc error: code = NotFound desc = could not find container \"390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1\": container with ID starting with 390c7ad90e122cd9e54a057b021852f0df332f241260002c61d7aef9cfbfd6a1 not found: ID does not exist" Feb 02 09:11:26 crc kubenswrapper[4720]: I0202 09:11:26.912123 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" path="/var/lib/kubelet/pods/a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5/volumes" Feb 02 09:11:27 crc kubenswrapper[4720]: I0202 09:11:27.142399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzstb" event={"ID":"e75ccb9d-f65f-40ac-8255-92685e9d3dd3","Type":"ContainerStarted","Data":"d52036928c51866df97ffb411435a5f12900a94843281d99274243208806fa89"} Feb 02 09:11:27 crc kubenswrapper[4720]: I0202 09:11:27.142444 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzstb" event={"ID":"e75ccb9d-f65f-40ac-8255-92685e9d3dd3","Type":"ContainerStarted","Data":"6eb55bc8ad1231e5d194079ca14553207470f14db72ba1467010d1554b0817e8"} Feb 02 09:11:27 crc kubenswrapper[4720]: I0202 09:11:27.142454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzstb" event={"ID":"e75ccb9d-f65f-40ac-8255-92685e9d3dd3","Type":"ContainerStarted","Data":"ab1965844fd0543eeb51a081aa0192980b130eab3a7a95087cca643021867a75"} Feb 02 09:11:27 crc kubenswrapper[4720]: I0202 09:11:27.142707 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rzstb" Feb 02 09:11:27 crc kubenswrapper[4720]: I0202 09:11:27.163605 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rzstb" podStartSLOduration=3.163581372 podStartE2EDuration="3.163581372s" podCreationTimestamp="2026-02-02 09:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:11:27.159690025 +0000 UTC m=+921.015315591" watchObservedRunningTime="2026-02-02 09:11:27.163581372 +0000 UTC m=+921.019206928" Feb 02 09:11:32 crc kubenswrapper[4720]: I0202 09:11:32.177226 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" event={"ID":"fdebd093-0e66-4e15-b5b4-9052f4f4c487","Type":"ContainerStarted","Data":"dca7632dc204f989b0b7bb93dbe8682fbaeec4b0edbfc229c0b58e5e637646c5"} Feb 02 09:11:32 crc kubenswrapper[4720]: I0202 09:11:32.177998 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:32 crc kubenswrapper[4720]: I0202 09:11:32.179914 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"e1f94007aa6d705009b5bac9d7409f4ae9e8685e1e91d823b83f4bddffef18bc"} Feb 02 09:11:32 crc kubenswrapper[4720]: I0202 09:11:32.204552 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" podStartSLOduration=1.002601846 podStartE2EDuration="8.204533566s" podCreationTimestamp="2026-02-02 09:11:24 +0000 UTC" firstStartedPulling="2026-02-02 09:11:24.733263154 +0000 UTC m=+918.588888710" lastFinishedPulling="2026-02-02 09:11:31.935194844 +0000 UTC m=+925.790820430" observedRunningTime="2026-02-02 09:11:32.19911019 +0000 UTC m=+926.054735746" watchObservedRunningTime="2026-02-02 09:11:32.204533566 +0000 UTC m=+926.060159122" Feb 02 09:11:33 crc kubenswrapper[4720]: I0202 09:11:33.192552 4720 generic.go:334] "Generic (PLEG): container finished" podID="0c063cbf-7388-4656-ab8b-0796a145119e" containerID="e1f94007aa6d705009b5bac9d7409f4ae9e8685e1e91d823b83f4bddffef18bc" exitCode=0 Feb 02 09:11:33 crc kubenswrapper[4720]: I0202 09:11:33.193061 4720 generic.go:334] "Generic (PLEG): container finished" podID="0c063cbf-7388-4656-ab8b-0796a145119e" containerID="043f9ef3bd70d76a3086c80c5ece3c38c9105d213ffda88c7e87e794dc8e43c1" exitCode=0 Feb 02 09:11:33 crc kubenswrapper[4720]: I0202 09:11:33.192689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerDied","Data":"e1f94007aa6d705009b5bac9d7409f4ae9e8685e1e91d823b83f4bddffef18bc"} Feb 02 09:11:33 crc kubenswrapper[4720]: I0202 09:11:33.193165 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerDied","Data":"043f9ef3bd70d76a3086c80c5ece3c38c9105d213ffda88c7e87e794dc8e43c1"} Feb 02 09:11:34 crc kubenswrapper[4720]: I0202 09:11:34.205709 4720 generic.go:334] "Generic (PLEG): container finished" podID="0c063cbf-7388-4656-ab8b-0796a145119e" containerID="e4fc4a3b0b9d88435c749bd7474c3ff6d3c25aa4030986c9902f6180e381b996" exitCode=0 Feb 02 09:11:34 crc kubenswrapper[4720]: I0202 09:11:34.205836 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerDied","Data":"e4fc4a3b0b9d88435c749bd7474c3ff6d3c25aa4030986c9902f6180e381b996"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.219025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"2126d7f57a1a81356b57c7f62677367d04c7ef84f9b037c47c1539a4c45ed5ba"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.219359 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"9b7330e0797364c335b5151f9c7ab1669d2002ab7c3a6958aa5922b7c07530f1"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.219370 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" 
event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"7b9715d46e445d427b9d20de74faa162b4efed10521993145833dcbe6fc2e2b7"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.219379 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"e105c9dc86ec0b59fd293e672d5ec12db5560eb46ee4a7d58b574e6fa43582d0"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.219389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"aa0e15e990c8f2056e9c55e13ceab68c733381c03a8213285a7c4bbd21d59d67"} Feb 02 09:11:35 crc kubenswrapper[4720]: I0202 09:11:35.293706 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8kt65" Feb 02 09:11:36 crc kubenswrapper[4720]: I0202 09:11:36.118603 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rzstb" Feb 02 09:11:36 crc kubenswrapper[4720]: I0202 09:11:36.233094 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k2llz" event={"ID":"0c063cbf-7388-4656-ab8b-0796a145119e","Type":"ContainerStarted","Data":"e780e1359f81bbcd368ae04f5b4b8d3ee967738285e6289371e25217c2cc138a"} Feb 02 09:11:36 crc kubenswrapper[4720]: I0202 09:11:36.233521 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:36 crc kubenswrapper[4720]: I0202 09:11:36.271194 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k2llz" podStartSLOduration=5.755973982 podStartE2EDuration="12.271160305s" podCreationTimestamp="2026-02-02 09:11:24 +0000 UTC" firstStartedPulling="2026-02-02 09:11:25.412446141 +0000 UTC m=+919.268071697" lastFinishedPulling="2026-02-02 09:11:31.927632434 +0000 UTC m=+925.783258020" observedRunningTime="2026-02-02 09:11:36.263852332 +0000 UTC m=+930.119477928" watchObservedRunningTime="2026-02-02 09:11:36.271160305 +0000 UTC m=+930.126785891" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.783317 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:38 crc kubenswrapper[4720]: E0202 09:11:38.784671 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="extract-utilities" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.784757 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="extract-utilities" Feb 02 09:11:38 crc kubenswrapper[4720]: E0202 09:11:38.785011 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="registry-server" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.785082 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="registry-server" Feb 02 09:11:38 crc kubenswrapper[4720]: E0202 09:11:38.785167 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="extract-content" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.785233 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" 
containerName="extract-content" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.785439 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ce9ed6-28ab-43a1-93cd-4db1849ef4b5" containerName="registry-server" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.785982 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.794125 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pzfzf" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.794779 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.794801 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.818828 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:38 crc kubenswrapper[4720]: I0202 09:11:38.918229 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz4c\" (UniqueName: \"kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c\") pod \"openstack-operator-index-k94wq\" (UID: \"15843d5e-54c4-498c-9a8c-2346a2c84913\") " pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:39 crc kubenswrapper[4720]: I0202 09:11:39.019587 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klz4c\" (UniqueName: \"kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c\") pod \"openstack-operator-index-k94wq\" (UID: \"15843d5e-54c4-498c-9a8c-2346a2c84913\") " pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:39 crc kubenswrapper[4720]: I0202 09:11:39.042005 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz4c\" (UniqueName: \"kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c\") pod \"openstack-operator-index-k94wq\" (UID: \"15843d5e-54c4-498c-9a8c-2346a2c84913\") " pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:39 crc kubenswrapper[4720]: I0202 09:11:39.134939 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:39 crc kubenswrapper[4720]: I0202 09:11:39.382835 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:39 crc kubenswrapper[4720]: W0202 09:11:39.385410 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15843d5e_54c4_498c_9a8c_2346a2c84913.slice/crio-b3d07b471a5a4a10ab9de31edbb046035ddd66f121c6329ea1afbf1e601b7783 WatchSource:0}: Error finding container b3d07b471a5a4a10ab9de31edbb046035ddd66f121c6329ea1afbf1e601b7783: Status 404 returned error can't find the container with id b3d07b471a5a4a10ab9de31edbb046035ddd66f121c6329ea1afbf1e601b7783 Feb 02 09:11:40 crc kubenswrapper[4720]: I0202 09:11:40.141230 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:40 crc kubenswrapper[4720]: I0202 09:11:40.185122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:40 crc kubenswrapper[4720]: I0202 09:11:40.262534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k94wq" event={"ID":"15843d5e-54c4-498c-9a8c-2346a2c84913","Type":"ContainerStarted","Data":"b3d07b471a5a4a10ab9de31edbb046035ddd66f121c6329ea1afbf1e601b7783"} Feb 02 09:11:42 crc kubenswrapper[4720]: I0202 09:11:42.955626 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.284735 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k94wq" event={"ID":"15843d5e-54c4-498c-9a8c-2346a2c84913","Type":"ContainerStarted","Data":"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5"} Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.308530 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k94wq" podStartSLOduration=2.16119608 podStartE2EDuration="5.308500071s" podCreationTimestamp="2026-02-02 09:11:38 +0000 UTC" firstStartedPulling="2026-02-02 09:11:39.388444975 +0000 UTC m=+933.244070571" lastFinishedPulling="2026-02-02 09:11:42.535749016 +0000 UTC m=+936.391374562" observedRunningTime="2026-02-02 09:11:43.305190208 +0000 UTC m=+937.160815804" watchObservedRunningTime="2026-02-02 09:11:43.308500071 +0000 UTC m=+937.164125667" Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.768498 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xpqwh"] Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.769624 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.778087 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xpqwh"] Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.812942 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpk4\" (UniqueName: \"kubernetes.io/projected/63c49313-5150-41a6-aa66-501aee8efe41-kube-api-access-hxpk4\") pod \"openstack-operator-index-xpqwh\" (UID: \"63c49313-5150-41a6-aa66-501aee8efe41\") " pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.913853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpk4\" (UniqueName: \"kubernetes.io/projected/63c49313-5150-41a6-aa66-501aee8efe41-kube-api-access-hxpk4\") pod \"openstack-operator-index-xpqwh\" (UID: \"63c49313-5150-41a6-aa66-501aee8efe41\") " pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:43 crc kubenswrapper[4720]: I0202 09:11:43.940104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpk4\" (UniqueName: \"kubernetes.io/projected/63c49313-5150-41a6-aa66-501aee8efe41-kube-api-access-hxpk4\") pod \"openstack-operator-index-xpqwh\" (UID: \"63c49313-5150-41a6-aa66-501aee8efe41\") " pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.130019 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.298516 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-k94wq" podUID="15843d5e-54c4-498c-9a8c-2346a2c84913" containerName="registry-server" containerID="cri-o://2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5" gracePeriod=2 Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.454935 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xpqwh"] Feb 02 09:11:44 crc kubenswrapper[4720]: W0202 09:11:44.466053 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63c49313_5150_41a6_aa66_501aee8efe41.slice/crio-dbae81177111b90931e95a8c91236d233bdde484a4a2a6799c8a4ed1416ad875 WatchSource:0}: Error finding container dbae81177111b90931e95a8c91236d233bdde484a4a2a6799c8a4ed1416ad875: Status 404 returned error can't find the container with id dbae81177111b90931e95a8c91236d233bdde484a4a2a6799c8a4ed1416ad875 Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.556950 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vmc97" Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.730894 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.829918 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klz4c\" (UniqueName: \"kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c\") pod \"15843d5e-54c4-498c-9a8c-2346a2c84913\" (UID: \"15843d5e-54c4-498c-9a8c-2346a2c84913\") " Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.837414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c" (OuterVolumeSpecName: "kube-api-access-klz4c") pod "15843d5e-54c4-498c-9a8c-2346a2c84913" (UID: "15843d5e-54c4-498c-9a8c-2346a2c84913"). InnerVolumeSpecName "kube-api-access-klz4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:11:44 crc kubenswrapper[4720]: I0202 09:11:44.932385 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klz4c\" (UniqueName: \"kubernetes.io/projected/15843d5e-54c4-498c-9a8c-2346a2c84913-kube-api-access-klz4c\") on node \"crc\" DevicePath \"\"" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.144763 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k2llz" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.306295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xpqwh" event={"ID":"63c49313-5150-41a6-aa66-501aee8efe41","Type":"ContainerStarted","Data":"1d17b4367d4eafccc6caafa8af90d65b4a107c8c3e37c441e131e384b0bf77c6"} Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.306344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xpqwh" event={"ID":"63c49313-5150-41a6-aa66-501aee8efe41","Type":"ContainerStarted","Data":"dbae81177111b90931e95a8c91236d233bdde484a4a2a6799c8a4ed1416ad875"} Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.308165 4720 generic.go:334] "Generic (PLEG): container finished" podID="15843d5e-54c4-498c-9a8c-2346a2c84913" containerID="2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5" exitCode=0 Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.308197 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k94wq" event={"ID":"15843d5e-54c4-498c-9a8c-2346a2c84913","Type":"ContainerDied","Data":"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5"} Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.308216 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k94wq" event={"ID":"15843d5e-54c4-498c-9a8c-2346a2c84913","Type":"ContainerDied","Data":"b3d07b471a5a4a10ab9de31edbb046035ddd66f121c6329ea1afbf1e601b7783"} Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.308216 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k94wq" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.308235 4720 scope.go:117] "RemoveContainer" containerID="2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.331258 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xpqwh" podStartSLOduration=2.256579236 podStartE2EDuration="2.331236721s" podCreationTimestamp="2026-02-02 09:11:43 +0000 UTC" firstStartedPulling="2026-02-02 09:11:44.469667476 +0000 UTC m=+938.325293052" lastFinishedPulling="2026-02-02 09:11:44.544324941 +0000 UTC m=+938.399950537" observedRunningTime="2026-02-02 09:11:45.327728294 +0000 UTC m=+939.183353870" watchObservedRunningTime="2026-02-02 09:11:45.331236721 +0000 UTC m=+939.186862287" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.338089 4720 scope.go:117] "RemoveContainer" containerID="2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5" Feb 02 09:11:45 crc kubenswrapper[4720]: E0202 09:11:45.338567 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5\": container with ID starting with 2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5 not found: ID does not exist" containerID="2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.338617 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5"} err="failed to get container status \"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5\": rpc error: code = NotFound desc = could not find container \"2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5\": container with ID starting with 2cbcb89bb32665cc6a14dffa7bb3d2be3d41e9ced46d555451a85565f6136ff5 not found: ID does not exist" Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.344911 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:45 crc kubenswrapper[4720]: I0202 09:11:45.352339 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-k94wq"] Feb 02 09:11:46 crc kubenswrapper[4720]: I0202 09:11:46.899305 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15843d5e-54c4-498c-9a8c-2346a2c84913" path="/var/lib/kubelet/pods/15843d5e-54c4-498c-9a8c-2346a2c84913/volumes" Feb 02 09:11:54 crc kubenswrapper[4720]: I0202 09:11:54.131313 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:54 crc kubenswrapper[4720]: I0202 09:11:54.132013 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:54 crc kubenswrapper[4720]: I0202 09:11:54.164699 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:11:54 crc kubenswrapper[4720]: I0202 09:11:54.400833 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xpqwh" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 
09:12:01.085261 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l"] Feb 02 09:12:01 crc kubenswrapper[4720]: E0202 09:12:01.086309 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15843d5e-54c4-498c-9a8c-2346a2c84913" containerName="registry-server" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.086332 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15843d5e-54c4-498c-9a8c-2346a2c84913" containerName="registry-server" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.086550 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15843d5e-54c4-498c-9a8c-2346a2c84913" containerName="registry-server" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.087976 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.091307 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-twrsv" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.102762 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l"] Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.175431 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.175603 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.175716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg74w\" (UniqueName: \"kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.276919 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.276989 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util\") pod 
\"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.277058 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg74w\" (UniqueName: \"kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.278306 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.278397 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.316419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg74w\" (UniqueName: \"kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w\") pod \"e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.418586 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:01 crc kubenswrapper[4720]: I0202 09:12:01.888347 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l"] Feb 02 09:12:01 crc kubenswrapper[4720]: W0202 09:12:01.896907 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ebfa1a_d538_4a12_87dd_cc8658df99a3.slice/crio-0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94 WatchSource:0}: Error finding container 0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94: Status 404 returned error can't find the container with id 0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94 Feb 02 09:12:02 crc kubenswrapper[4720]: I0202 09:12:02.447986 4720 generic.go:334] "Generic (PLEG): container finished" podID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerID="24f80cb564dfbf34cc161c479fe6bed164853180241a94d4118837f7cfa54bf6" exitCode=0 Feb 02 09:12:02 crc kubenswrapper[4720]: I0202 09:12:02.448050 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" event={"ID":"07ebfa1a-d538-4a12-87dd-cc8658df99a3","Type":"ContainerDied","Data":"24f80cb564dfbf34cc161c479fe6bed164853180241a94d4118837f7cfa54bf6"} Feb 02 09:12:02 crc kubenswrapper[4720]: I0202 09:12:02.448092 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" event={"ID":"07ebfa1a-d538-4a12-87dd-cc8658df99a3","Type":"ContainerStarted","Data":"0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94"} Feb 02 09:12:03 crc kubenswrapper[4720]: I0202 09:12:03.461662 4720 generic.go:334] "Generic (PLEG): container finished" podID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerID="68b891a7f5f5683a94e6225466f2f15171bdadf718de86655c46ca2ec9f6cda3" exitCode=0 Feb 02 09:12:03 crc kubenswrapper[4720]: I0202 09:12:03.461767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" event={"ID":"07ebfa1a-d538-4a12-87dd-cc8658df99a3","Type":"ContainerDied","Data":"68b891a7f5f5683a94e6225466f2f15171bdadf718de86655c46ca2ec9f6cda3"} Feb 02 09:12:04 crc kubenswrapper[4720]: I0202 09:12:04.472629 4720 generic.go:334] "Generic (PLEG): container finished" podID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerID="8a262d4fd59bc688c3f6ffd42f8c70826c516704e5453ff07d5dae1a91477ba3" exitCode=0 Feb 02 09:12:04 crc kubenswrapper[4720]: I0202 09:12:04.472691 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" event={"ID":"07ebfa1a-d538-4a12-87dd-cc8658df99a3","Type":"ContainerDied","Data":"8a262d4fd59bc688c3f6ffd42f8c70826c516704e5453ff07d5dae1a91477ba3"} Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.766311 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.947608 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle\") pod \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.948354 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util\") pod \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.948429 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg74w\" (UniqueName: \"kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w\") pod \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\" (UID: \"07ebfa1a-d538-4a12-87dd-cc8658df99a3\") " Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.951699 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle" (OuterVolumeSpecName: "bundle") pod "07ebfa1a-d538-4a12-87dd-cc8658df99a3" (UID: "07ebfa1a-d538-4a12-87dd-cc8658df99a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.968228 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w" (OuterVolumeSpecName: "kube-api-access-wg74w") pod "07ebfa1a-d538-4a12-87dd-cc8658df99a3" (UID: "07ebfa1a-d538-4a12-87dd-cc8658df99a3"). InnerVolumeSpecName "kube-api-access-wg74w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:12:05 crc kubenswrapper[4720]: I0202 09:12:05.988134 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util" (OuterVolumeSpecName: "util") pod "07ebfa1a-d538-4a12-87dd-cc8658df99a3" (UID: "07ebfa1a-d538-4a12-87dd-cc8658df99a3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.049869 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-util\") on node \"crc\" DevicePath \"\"" Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.049961 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg74w\" (UniqueName: \"kubernetes.io/projected/07ebfa1a-d538-4a12-87dd-cc8658df99a3-kube-api-access-wg74w\") on node \"crc\" DevicePath \"\"" Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.049984 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/07ebfa1a-d538-4a12-87dd-cc8658df99a3-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.494068 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" event={"ID":"07ebfa1a-d538-4a12-87dd-cc8658df99a3","Type":"ContainerDied","Data":"0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94"} Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.494121 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0757c5e4df2363711daaa2433797ee935c78f75a0ed62b1a2fe2c2700fb19f94" Feb 02 09:12:06 crc kubenswrapper[4720]: I0202 09:12:06.494162 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.719031 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd"] Feb 02 09:12:13 crc kubenswrapper[4720]: E0202 09:12:13.719757 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="extract" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.719772 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="extract" Feb 02 09:12:13 crc kubenswrapper[4720]: E0202 09:12:13.719785 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="pull" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.719791 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="pull" Feb 02 09:12:13 crc kubenswrapper[4720]: E0202 09:12:13.719809 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="util" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.719814 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="util" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.719921 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ebfa1a-d538-4a12-87dd-cc8658df99a3" containerName="extract" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.720340 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.723516 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-58bnr" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.750225 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd"] Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.865277 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzhc\" (UniqueName: \"kubernetes.io/projected/8f295d56-98ca-48ee-a63a-32956f7693f7-kube-api-access-trzhc\") pod \"openstack-operator-controller-init-5b57c84fd5-qzpjd\" (UID: \"8f295d56-98ca-48ee-a63a-32956f7693f7\") " pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:13 crc kubenswrapper[4720]: I0202 09:12:13.966632 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trzhc\" (UniqueName: \"kubernetes.io/projected/8f295d56-98ca-48ee-a63a-32956f7693f7-kube-api-access-trzhc\") pod \"openstack-operator-controller-init-5b57c84fd5-qzpjd\" (UID: \"8f295d56-98ca-48ee-a63a-32956f7693f7\") " pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:14 crc kubenswrapper[4720]: I0202 09:12:13.999063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzhc\" (UniqueName: \"kubernetes.io/projected/8f295d56-98ca-48ee-a63a-32956f7693f7-kube-api-access-trzhc\") pod \"openstack-operator-controller-init-5b57c84fd5-qzpjd\" (UID: \"8f295d56-98ca-48ee-a63a-32956f7693f7\") " pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:14 crc kubenswrapper[4720]: I0202 09:12:14.045268 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:14 crc kubenswrapper[4720]: I0202 09:12:14.323318 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd"] Feb 02 09:12:14 crc kubenswrapper[4720]: I0202 09:12:14.559353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" event={"ID":"8f295d56-98ca-48ee-a63a-32956f7693f7","Type":"ContainerStarted","Data":"4284067aee2b7702b19181ee923120cc0230d9aadf9c449481d512dee32c5145"} Feb 02 09:12:18 crc kubenswrapper[4720]: I0202 09:12:18.592255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" event={"ID":"8f295d56-98ca-48ee-a63a-32956f7693f7","Type":"ContainerStarted","Data":"1ab0245e59ab8406dce4a8b258ecb46c6bb5decfe7ce966a5d1f45365330d3e8"} Feb 02 09:12:18 crc kubenswrapper[4720]: I0202 09:12:18.592824 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:18 crc kubenswrapper[4720]: I0202 09:12:18.629757 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" podStartSLOduration=1.739681741 podStartE2EDuration="5.629727808s" podCreationTimestamp="2026-02-02 09:12:13 +0000 UTC" firstStartedPulling="2026-02-02 09:12:14.328047902 +0000 UTC m=+968.183673488" lastFinishedPulling="2026-02-02 09:12:18.218093959 +0000 UTC m=+972.073719555" observedRunningTime="2026-02-02 09:12:18.620750003 +0000 UTC m=+972.476375609" watchObservedRunningTime="2026-02-02 09:12:18.629727808 +0000 UTC m=+972.485353404" Feb 02 09:12:24 crc kubenswrapper[4720]: I0202 09:12:24.049507 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5b57c84fd5-qzpjd" Feb 02 09:12:47 crc kubenswrapper[4720]: I0202 09:12:47.902435 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:12:47 crc kubenswrapper[4720]: I0202 09:12:47.903070 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.852923 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.854248 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.856324 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zmbvv" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.866317 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.896760 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.898020 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.900375 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-47662" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.903778 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.918197 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.919231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.923014 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kcztl" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.924449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76w8d\" (UniqueName: \"kubernetes.io/projected/0773964a-e514-4efc-8e88-ea5e71d4a7eb-kube-api-access-76w8d\") pod \"cinder-operator-controller-manager-8d874c8fc-txm2d\" (UID: \"0773964a-e514-4efc-8e88-ea5e71d4a7eb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.924550 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7h4w\" (UniqueName: \"kubernetes.io/projected/74c9a454-0e13-4b29-89d5-cbfd77d7db21-kube-api-access-j7h4w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ts2bs\" (UID: \"74c9a454-0e13-4b29-89d5-cbfd77d7db21\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.927592 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.928333 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.932056 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.932753 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.935090 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-h94sj" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.935258 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rg5kg" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.953856 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.971302 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.977979 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.978826 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.978994 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf"] Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.983536 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j7tdd" Feb 02 09:13:00 crc kubenswrapper[4720]: I0202 09:13:00.995405 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.001314 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.002064 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.003853 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.004122 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s5w5z" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.015618 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qtn\" (UniqueName: \"kubernetes.io/projected/ae44cd5d-4fe1-4268-b247-d03075fd37b2-kube-api-access-l6qtn\") pod \"horizon-operator-controller-manager-5fb775575f-42lrb\" (UID: \"ae44cd5d-4fe1-4268-b247-d03075fd37b2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025540 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76w8d\" (UniqueName: \"kubernetes.io/projected/0773964a-e514-4efc-8e88-ea5e71d4a7eb-kube-api-access-76w8d\") pod \"cinder-operator-controller-manager-8d874c8fc-txm2d\" (UID: \"0773964a-e514-4efc-8e88-ea5e71d4a7eb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv7q9\" (UniqueName: \"kubernetes.io/projected/2512bb69-cdd5-4288-a023-08271514a5ed-kube-api-access-rv7q9\") pod \"designate-operator-controller-manager-6d9697b7f4-2v5h2\" (UID: \"2512bb69-cdd5-4288-a023-08271514a5ed\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7h4w\" (UniqueName: \"kubernetes.io/projected/74c9a454-0e13-4b29-89d5-cbfd77d7db21-kube-api-access-j7h4w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ts2bs\" (UID: \"74c9a454-0e13-4b29-89d5-cbfd77d7db21\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhwf\" (UniqueName: \"kubernetes.io/projected/13d9ccbd-a49d-4b71-9c76-251ad5309b8d-kube-api-access-bxhwf\") pod \"heat-operator-controller-manager-69d6db494d-bw8tf\" (UID: \"13d9ccbd-a49d-4b71-9c76-251ad5309b8d\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqd2\" (UniqueName: \"kubernetes.io/projected/f48341fa-8eb8-49f2-b177-2c10de4db8fd-kube-api-access-wtqd2\") pod \"glance-operator-controller-manager-8886f4c47-ccx5d\" (UID: \"f48341fa-8eb8-49f2-b177-2c10de4db8fd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:01 crc 
kubenswrapper[4720]: I0202 09:13:01.025713 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.025733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rx8z\" (UniqueName: \"kubernetes.io/projected/98ee1d10-d444-4de0-a20c-99258ae4c5da-kube-api-access-4rx8z\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.030064 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.030850 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.039206 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fm8mn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.052944 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.053665 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.056766 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-td99c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.057733 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76w8d\" (UniqueName: \"kubernetes.io/projected/0773964a-e514-4efc-8e88-ea5e71d4a7eb-kube-api-access-76w8d\") pod \"cinder-operator-controller-manager-8d874c8fc-txm2d\" (UID: \"0773964a-e514-4efc-8e88-ea5e71d4a7eb\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.060030 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.067116 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7h4w\" (UniqueName: \"kubernetes.io/projected/74c9a454-0e13-4b29-89d5-cbfd77d7db21-kube-api-access-j7h4w\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-ts2bs\" (UID: \"74c9a454-0e13-4b29-89d5-cbfd77d7db21\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.092162 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.092964 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.096593 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rvmhd" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.101436 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128466 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxhwf\" (UniqueName: \"kubernetes.io/projected/13d9ccbd-a49d-4b71-9c76-251ad5309b8d-kube-api-access-bxhwf\") pod \"heat-operator-controller-manager-69d6db494d-bw8tf\" (UID: \"13d9ccbd-a49d-4b71-9c76-251ad5309b8d\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128532 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqd2\" (UniqueName: \"kubernetes.io/projected/f48341fa-8eb8-49f2-b177-2c10de4db8fd-kube-api-access-wtqd2\") pod \"glance-operator-controller-manager-8886f4c47-ccx5d\" (UID: \"f48341fa-8eb8-49f2-b177-2c10de4db8fd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rx8z\" (UniqueName: \"kubernetes.io/projected/98ee1d10-d444-4de0-a20c-99258ae4c5da-kube-api-access-4rx8z\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6qtn\" (UniqueName: \"kubernetes.io/projected/ae44cd5d-4fe1-4268-b247-d03075fd37b2-kube-api-access-l6qtn\") pod \"horizon-operator-controller-manager-5fb775575f-42lrb\" (UID: \"ae44cd5d-4fe1-4268-b247-d03075fd37b2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.128690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv7q9\" (UniqueName: \"kubernetes.io/projected/2512bb69-cdd5-4288-a023-08271514a5ed-kube-api-access-rv7q9\") pod \"designate-operator-controller-manager-6d9697b7f4-2v5h2\" (UID: \"2512bb69-cdd5-4288-a023-08271514a5ed\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.129631 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.129698 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert podName:98ee1d10-d444-4de0-a20c-99258ae4c5da nodeName:}" failed. No retries permitted until 2026-02-02 09:13:01.629668184 +0000 UTC m=+1015.485293740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert") pod "infra-operator-controller-manager-79955696d6-k9qvn" (UID: "98ee1d10-d444-4de0-a20c-99258ae4c5da") : secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.173498 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.177526 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxhwf\" (UniqueName: \"kubernetes.io/projected/13d9ccbd-a49d-4b71-9c76-251ad5309b8d-kube-api-access-bxhwf\") pod \"heat-operator-controller-manager-69d6db494d-bw8tf\" (UID: \"13d9ccbd-a49d-4b71-9c76-251ad5309b8d\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.178621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6qtn\" (UniqueName: \"kubernetes.io/projected/ae44cd5d-4fe1-4268-b247-d03075fd37b2-kube-api-access-l6qtn\") pod \"horizon-operator-controller-manager-5fb775575f-42lrb\" (UID: \"ae44cd5d-4fe1-4268-b247-d03075fd37b2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.185562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv7q9\" (UniqueName: \"kubernetes.io/projected/2512bb69-cdd5-4288-a023-08271514a5ed-kube-api-access-rv7q9\") pod \"designate-operator-controller-manager-6d9697b7f4-2v5h2\" (UID: \"2512bb69-cdd5-4288-a023-08271514a5ed\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.197241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqd2\" (UniqueName: \"kubernetes.io/projected/f48341fa-8eb8-49f2-b177-2c10de4db8fd-kube-api-access-wtqd2\") pod \"glance-operator-controller-manager-8886f4c47-ccx5d\" (UID: \"f48341fa-8eb8-49f2-b177-2c10de4db8fd\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.204801 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.227172 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rx8z\" (UniqueName: \"kubernetes.io/projected/98ee1d10-d444-4de0-a20c-99258ae4c5da-kube-api-access-4rx8z\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.234564 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.242166 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.252376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4z44\" (UniqueName: \"kubernetes.io/projected/4dd293f6-9311-41de-8c84-66780a5e7a77-kube-api-access-k4z44\") pod \"manila-operator-controller-manager-7dd968899f-99gf8\" (UID: \"4dd293f6-9311-41de-8c84-66780a5e7a77\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.252430 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fps5\" (UniqueName: \"kubernetes.io/projected/e72595d8-8a2a-4b75-8d5d-881209734957-kube-api-access-7fps5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-np86c\" (UID: \"e72595d8-8a2a-4b75-8d5d-881209734957\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.252569 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnl9\" (UniqueName: \"kubernetes.io/projected/00a9f518-1d32-4029-ab03-024c73526aa6-kube-api-access-2vnl9\") pod \"keystone-operator-controller-manager-84f48565d4-bbsfl\" (UID: \"00a9f518-1d32-4029-ab03-024c73526aa6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.252783 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.263186 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.270936 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.271668 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.281626 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xp8px" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.297445 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.304365 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.353765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4z44\" (UniqueName: \"kubernetes.io/projected/4dd293f6-9311-41de-8c84-66780a5e7a77-kube-api-access-k4z44\") pod \"manila-operator-controller-manager-7dd968899f-99gf8\" (UID: \"4dd293f6-9311-41de-8c84-66780a5e7a77\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.353803 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fps5\" (UniqueName: \"kubernetes.io/projected/e72595d8-8a2a-4b75-8d5d-881209734957-kube-api-access-7fps5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-np86c\" (UID: \"e72595d8-8a2a-4b75-8d5d-881209734957\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.353860 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnl9\" (UniqueName: \"kubernetes.io/projected/00a9f518-1d32-4029-ab03-024c73526aa6-kube-api-access-2vnl9\") pod \"keystone-operator-controller-manager-84f48565d4-bbsfl\" (UID: \"00a9f518-1d32-4029-ab03-024c73526aa6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.355538 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.356393 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.359803 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gn4mp" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.405662 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnl9\" (UniqueName: \"kubernetes.io/projected/00a9f518-1d32-4029-ab03-024c73526aa6-kube-api-access-2vnl9\") pod \"keystone-operator-controller-manager-84f48565d4-bbsfl\" (UID: \"00a9f518-1d32-4029-ab03-024c73526aa6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.409124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4z44\" (UniqueName: \"kubernetes.io/projected/4dd293f6-9311-41de-8c84-66780a5e7a77-kube-api-access-k4z44\") pod \"manila-operator-controller-manager-7dd968899f-99gf8\" (UID: \"4dd293f6-9311-41de-8c84-66780a5e7a77\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.419522 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fps5\" (UniqueName: \"kubernetes.io/projected/e72595d8-8a2a-4b75-8d5d-881209734957-kube-api-access-7fps5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-np86c\" (UID: \"e72595d8-8a2a-4b75-8d5d-881209734957\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.419801 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.429828 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.431645 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.439448 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8rkxt" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.458058 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjt7c\" (UniqueName: \"kubernetes.io/projected/365a2cb5-0761-452a-a3e9-b19749919661-kube-api-access-rjt7c\") pod \"mariadb-operator-controller-manager-67bf948998-z6v6h\" (UID: \"365a2cb5-0761-452a-a3e9-b19749919661\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.458113 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkc7n\" (UniqueName: \"kubernetes.io/projected/409368d6-8b01-4aa7-8d28-65c77d3158ab-kube-api-access-qkc7n\") pod \"neutron-operator-controller-manager-585dbc889-8s87h\" (UID: \"409368d6-8b01-4aa7-8d28-65c77d3158ab\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.462426 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.474622 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.479715 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5hpnx" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.486982 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.542144 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.561341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjt7c\" (UniqueName: \"kubernetes.io/projected/365a2cb5-0761-452a-a3e9-b19749919661-kube-api-access-rjt7c\") pod \"mariadb-operator-controller-manager-67bf948998-z6v6h\" (UID: \"365a2cb5-0761-452a-a3e9-b19749919661\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.561409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5zb\" (UniqueName: \"kubernetes.io/projected/bfdd7555-2c9b-4f4f-a25c-289667ea0526-kube-api-access-kk5zb\") pod \"nova-operator-controller-manager-55bff696bd-m5hl6\" (UID: \"bfdd7555-2c9b-4f4f-a25c-289667ea0526\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.561449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkc7n\" (UniqueName: \"kubernetes.io/projected/409368d6-8b01-4aa7-8d28-65c77d3158ab-kube-api-access-qkc7n\") pod \"neutron-operator-controller-manager-585dbc889-8s87h\" (UID: \"409368d6-8b01-4aa7-8d28-65c77d3158ab\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.561775 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.586066 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.597624 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkc7n\" (UniqueName: \"kubernetes.io/projected/409368d6-8b01-4aa7-8d28-65c77d3158ab-kube-api-access-qkc7n\") pod \"neutron-operator-controller-manager-585dbc889-8s87h\" (UID: \"409368d6-8b01-4aa7-8d28-65c77d3158ab\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.611446 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjt7c\" (UniqueName: \"kubernetes.io/projected/365a2cb5-0761-452a-a3e9-b19749919661-kube-api-access-rjt7c\") pod \"mariadb-operator-controller-manager-67bf948998-z6v6h\" (UID: \"365a2cb5-0761-452a-a3e9-b19749919661\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.614922 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.615740 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.619060 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.622853 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-c9v5r" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.627288 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.632072 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.633651 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.641289 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dkjm7" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.657021 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.660775 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.662118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.662184 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnqd\" (UniqueName: \"kubernetes.io/projected/e29b414a-79dc-49f1-bf42-01bb60a090c5-kube-api-access-dxnqd\") pod \"octavia-operator-controller-manager-6687f8d877-9tgq4\" (UID: \"e29b414a-79dc-49f1-bf42-01bb60a090c5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.662219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk5zb\" (UniqueName: \"kubernetes.io/projected/bfdd7555-2c9b-4f4f-a25c-289667ea0526-kube-api-access-kk5zb\") pod \"nova-operator-controller-manager-55bff696bd-m5hl6\" (UID: \"bfdd7555-2c9b-4f4f-a25c-289667ea0526\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.662520 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.662556 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert podName:98ee1d10-d444-4de0-a20c-99258ae4c5da 
nodeName:}" failed. No retries permitted until 2026-02-02 09:13:02.662543422 +0000 UTC m=+1016.518168978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert") pod "infra-operator-controller-manager-79955696d6-k9qvn" (UID: "98ee1d10-d444-4de0-a20c-99258ae4c5da") : secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.662729 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.667686 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.668432 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.668647 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pcwbb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.669628 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.670593 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w487l" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.677403 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.685105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk5zb\" (UniqueName: \"kubernetes.io/projected/bfdd7555-2c9b-4f4f-a25c-289667ea0526-kube-api-access-kk5zb\") pod \"nova-operator-controller-manager-55bff696bd-m5hl6\" (UID: \"bfdd7555-2c9b-4f4f-a25c-289667ea0526\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.687615 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.695227 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.698113 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.705536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.705917 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.722933 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-j7hnr" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.734460 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.735526 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.735539 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.737048 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ws7bf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.763589 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77fp\" (UniqueName: \"kubernetes.io/projected/e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e-kube-api-access-p77fp\") pod \"swift-operator-controller-manager-68fc8c869-r5wzv\" (UID: \"e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.763633 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnqd\" (UniqueName: \"kubernetes.io/projected/e29b414a-79dc-49f1-bf42-01bb60a090c5-kube-api-access-dxnqd\") pod \"octavia-operator-controller-manager-6687f8d877-9tgq4\" (UID: \"e29b414a-79dc-49f1-bf42-01bb60a090c5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.763659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jdt\" (UniqueName: \"kubernetes.io/projected/adbc4332-64c2-4e3d-82de-495f217179a5-kube-api-access-s8jdt\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.763695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9c5\" (UniqueName: \"kubernetes.io/projected/dcd3565d-97bb-4e80-8620-5399b7ab6f2a-kube-api-access-df9c5\") pod \"ovn-operator-controller-manager-788c46999f-r2h8r\" (UID: \"dcd3565d-97bb-4e80-8620-5399b7ab6f2a\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.763754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: 
I0202 09:13:01.763771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gms\" (UniqueName: \"kubernetes.io/projected/0d0b8077-9ce3-47a4-bb23-7b21a8874d1e-kube-api-access-96gms\") pod \"placement-operator-controller-manager-5b964cf4cd-v5vdx\" (UID: \"0d0b8077-9ce3-47a4-bb23-7b21a8874d1e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.765365 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.798394 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-5gncs"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.803336 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.809860 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnqd\" (UniqueName: \"kubernetes.io/projected/e29b414a-79dc-49f1-bf42-01bb60a090c5-kube-api-access-dxnqd\") pod \"octavia-operator-controller-manager-6687f8d877-9tgq4\" (UID: \"e29b414a-79dc-49f1-bf42-01bb60a090c5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.811318 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w2m6m" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.829356 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-5gncs"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.831190 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.876249 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.908969 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.909658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gms\" (UniqueName: \"kubernetes.io/projected/0d0b8077-9ce3-47a4-bb23-7b21a8874d1e-kube-api-access-96gms\") pod \"placement-operator-controller-manager-5b964cf4cd-v5vdx\" (UID: \"0d0b8077-9ce3-47a4-bb23-7b21a8874d1e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.909780 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77fp\" (UniqueName: \"kubernetes.io/projected/e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e-kube-api-access-p77fp\") pod \"swift-operator-controller-manager-68fc8c869-r5wzv\" (UID: \"e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.909986 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8jdt\" (UniqueName: \"kubernetes.io/projected/adbc4332-64c2-4e3d-82de-495f217179a5-kube-api-access-s8jdt\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.910102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qg2h\" (UniqueName: \"kubernetes.io/projected/a80257cd-1bb9-4c20-87d3-ab6741c78b57-kube-api-access-5qg2h\") pod \"watcher-operator-controller-manager-564965969-5gncs\" (UID: \"a80257cd-1bb9-4c20-87d3-ab6741c78b57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.910190 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9c5\" (UniqueName: \"kubernetes.io/projected/dcd3565d-97bb-4e80-8620-5399b7ab6f2a-kube-api-access-df9c5\") pod \"ovn-operator-controller-manager-788c46999f-r2h8r\" (UID: \"dcd3565d-97bb-4e80-8620-5399b7ab6f2a\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.910274 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhcrw\" (UniqueName: \"kubernetes.io/projected/13347ee1-a6a4-435f-a5e5-8c9af5506dd9-kube-api-access-vhcrw\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4svf\" (UID: \"13347ee1-a6a4-435f-a5e5-8c9af5506dd9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.909496 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: E0202 09:13:01.910509 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert podName:adbc4332-64c2-4e3d-82de-495f217179a5 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:02.410456755 +0000 UTC m=+1016.266082311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" (UID: "adbc4332-64c2-4e3d-82de-495f217179a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.910369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpwx\" (UniqueName: \"kubernetes.io/projected/12badb48-0f9b-41a2-930d-7573f8485dcf-kube-api-access-xdpwx\") pod \"test-operator-controller-manager-56f8bfcd9f-jc9pb\" (UID: \"12badb48-0f9b-41a2-930d-7573f8485dcf\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.918281 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"] Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.923654 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.928826 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.930997 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.932951 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tk75d" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.941039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gms\" (UniqueName: \"kubernetes.io/projected/0d0b8077-9ce3-47a4-bb23-7b21a8874d1e-kube-api-access-96gms\") pod \"placement-operator-controller-manager-5b964cf4cd-v5vdx\" (UID: \"0d0b8077-9ce3-47a4-bb23-7b21a8874d1e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.941735 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77fp\" (UniqueName: \"kubernetes.io/projected/e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e-kube-api-access-p77fp\") pod \"swift-operator-controller-manager-68fc8c869-r5wzv\" (UID: \"e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.947801 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8jdt\" (UniqueName: \"kubernetes.io/projected/adbc4332-64c2-4e3d-82de-495f217179a5-kube-api-access-s8jdt\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:01 crc kubenswrapper[4720]: I0202 09:13:01.952506 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9c5\" (UniqueName: \"kubernetes.io/projected/dcd3565d-97bb-4e80-8620-5399b7ab6f2a-kube-api-access-df9c5\") pod \"ovn-operator-controller-manager-788c46999f-r2h8r\" (UID: \"dcd3565d-97bb-4e80-8620-5399b7ab6f2a\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:01.997001 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022798 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022862 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qg2h\" (UniqueName: \"kubernetes.io/projected/a80257cd-1bb9-4c20-87d3-ab6741c78b57-kube-api-access-5qg2h\") pod \"watcher-operator-controller-manager-564965969-5gncs\" (UID: \"a80257cd-1bb9-4c20-87d3-ab6741c78b57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbjv\" (UniqueName: \"kubernetes.io/projected/813cdc5b-c252-4b55-8d8a-cf0bfde51059-kube-api-access-6cbjv\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022928 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhcrw\" (UniqueName: \"kubernetes.io/projected/13347ee1-a6a4-435f-a5e5-8c9af5506dd9-kube-api-access-vhcrw\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4svf\" (UID: \"13347ee1-a6a4-435f-a5e5-8c9af5506dd9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.022972 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpwx\" (UniqueName: \"kubernetes.io/projected/12badb48-0f9b-41a2-930d-7573f8485dcf-kube-api-access-xdpwx\") pod \"test-operator-controller-manager-56f8bfcd9f-jc9pb\" (UID: \"12badb48-0f9b-41a2-930d-7573f8485dcf\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.023646 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx"
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.051245 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"]
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.060326 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdpwx\" (UniqueName: \"kubernetes.io/projected/12badb48-0f9b-41a2-930d-7573f8485dcf-kube-api-access-xdpwx\") pod \"test-operator-controller-manager-56f8bfcd9f-jc9pb\" (UID: \"12badb48-0f9b-41a2-930d-7573f8485dcf\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb"
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.060479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qg2h\" (UniqueName: \"kubernetes.io/projected/a80257cd-1bb9-4c20-87d3-ab6741c78b57-kube-api-access-5qg2h\") pod \"watcher-operator-controller-manager-564965969-5gncs\" (UID: \"a80257cd-1bb9-4c20-87d3-ab6741c78b57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs"
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.079340 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx"]
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.080416 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx"
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.081863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhcrw\" (UniqueName: \"kubernetes.io/projected/13347ee1-a6a4-435f-a5e5-8c9af5506dd9-kube-api-access-vhcrw\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4svf\" (UID: \"13347ee1-a6a4-435f-a5e5-8c9af5506dd9\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf"
Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.083775 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jk7hc"
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.099199 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.124269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbjv\" (UniqueName: \"kubernetes.io/projected/813cdc5b-c252-4b55-8d8a-cf0bfde51059-kube-api-access-6cbjv\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.124370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhrh\" (UniqueName: \"kubernetes.io/projected/53e25cfa-ef34-4ee4-826e-767a4f154f15-kube-api-access-fxhrh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g7wsx\" (UID: \"53e25cfa-ef34-4ee4-826e-767a4f154f15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.124427 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.124446 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.124564 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.124608 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:02.624594775 +0000 UTC m=+1016.480220331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "metrics-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.124972 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.129415 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:02.629227809 +0000 UTC m=+1016.484853365 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.165778 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbjv\" (UniqueName: \"kubernetes.io/projected/813cdc5b-c252-4b55-8d8a-cf0bfde51059-kube-api-access-6cbjv\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.173635 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.178938 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.200253 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.213467 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.239468 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.243449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhrh\" (UniqueName: \"kubernetes.io/projected/53e25cfa-ef34-4ee4-826e-767a4f154f15-kube-api-access-fxhrh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g7wsx\" (UID: \"53e25cfa-ef34-4ee4-826e-767a4f154f15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.245995 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.290308 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.299706 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.303667 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhrh\" (UniqueName: \"kubernetes.io/projected/53e25cfa-ef34-4ee4-826e-767a4f154f15-kube-api-access-fxhrh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g7wsx\" (UID: \"53e25cfa-ef34-4ee4-826e-767a4f154f15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.312266 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" Feb 02 09:13:02 crc kubenswrapper[4720]: W0202 09:13:02.339631 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2512bb69_cdd5_4288_a023_08271514a5ed.slice/crio-08a34e1408579cf5eceb7debe89a7dc6d3354007a1b227710cbee5e0e1ae1b7b WatchSource:0}: Error finding container 08a34e1408579cf5eceb7debe89a7dc6d3354007a1b227710cbee5e0e1ae1b7b: Status 404 returned error can't find the container with id 08a34e1408579cf5eceb7debe89a7dc6d3354007a1b227710cbee5e0e1ae1b7b Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.447551 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.447780 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.447831 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert podName:adbc4332-64c2-4e3d-82de-495f217179a5 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:03.447815844 +0000 UTC m=+1017.303441400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" (UID: "adbc4332-64c2-4e3d-82de-495f217179a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.649829 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.649871 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.650027 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.650074 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:03.650059741 +0000 UTC m=+1017.505685297 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "metrics-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.650364 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.650389 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:03.650382369 +0000 UTC m=+1017.506007925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.686988 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb"] Feb 02 09:13:02 crc kubenswrapper[4720]: W0202 09:13:02.697020 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae44cd5d_4fe1_4268_b247_d03075fd37b2.slice/crio-4ac1878ce1fab3beef668f039a0a312523544c038571953db0b7e38c9f266f7a WatchSource:0}: Error finding container 4ac1878ce1fab3beef668f039a0a312523544c038571953db0b7e38c9f266f7a: Status 404 returned error can't find the container with id 4ac1878ce1fab3beef668f039a0a312523544c038571953db0b7e38c9f266f7a Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.698732 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.719035 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.751968 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.752044 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: E0202 09:13:02.752098 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert podName:98ee1d10-d444-4de0-a20c-99258ae4c5da nodeName:}" failed. No retries permitted until 2026-02-02 09:13:04.752077426 +0000 UTC m=+1018.607702982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert") pod "infra-operator-controller-manager-79955696d6-k9qvn" (UID: "98ee1d10-d444-4de0-a20c-99258ae4c5da") : secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.759616 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.787665 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.899848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h"] Feb 02 09:13:02 crc kubenswrapper[4720]: W0202 09:13:02.900080 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365a2cb5_0761_452a_a3e9_b19749919661.slice/crio-aaf7b219bf07cf3f8ce2a98eaab20cd72c0c6138023b5397f126d6cbad6b74e3 WatchSource:0}: Error finding container aaf7b219bf07cf3f8ce2a98eaab20cd72c0c6138023b5397f126d6cbad6b74e3: Status 404 returned error can't find the container with id aaf7b219bf07cf3f8ce2a98eaab20cd72c0c6138023b5397f126d6cbad6b74e3 Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.970159 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4"] Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.978465 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r"] Feb 02 09:13:02 crc kubenswrapper[4720]: W0202 09:13:02.980079 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfdd7555_2c9b_4f4f_a25c_289667ea0526.slice/crio-c3a5c005c32960895a1ea3c38d12975e3bec49e54bb69a81bfab85a556aab9d6 WatchSource:0}: Error finding container c3a5c005c32960895a1ea3c38d12975e3bec49e54bb69a81bfab85a556aab9d6: Status 404 returned error can't find the container with id c3a5c005c32960895a1ea3c38d12975e3bec49e54bb69a81bfab85a556aab9d6 Feb 02 09:13:02 crc kubenswrapper[4720]: I0202 09:13:02.990465 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6"] Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.002469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv"] Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.029926 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} 
{} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p77fp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-r5wzv_openstack-operators(e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.030613 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx"] Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.031508 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" podUID="e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e" Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.033395 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96gms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-v5vdx_openstack-operators(0d0b8077-9ce3-47a4-bb23-7b21a8874d1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.034937 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" podUID="0d0b8077-9ce3-47a4-bb23-7b21a8874d1e"
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.060032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" event={"ID":"e29b414a-79dc-49f1-bf42-01bb60a090c5","Type":"ContainerStarted","Data":"efd60e726337904a70c9c098a2360153fa1a6b8065557310e74068c995d5d6f6"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.064455 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" event={"ID":"e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e","Type":"ContainerStarted","Data":"21c894ab9549b805792e1acbbe627959f4d181d69d8c92cc2fe924189117aea9"}
Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.066518 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" podUID="e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e"
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.074812 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" event={"ID":"f48341fa-8eb8-49f2-b177-2c10de4db8fd","Type":"ContainerStarted","Data":"2b4a5b670a7dd0481bdba62fc3832e395f9b7f5dfd6d501b3747c170344d1e75"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.083409 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" event={"ID":"74c9a454-0e13-4b29-89d5-cbfd77d7db21","Type":"ContainerStarted","Data":"6faf958748ff11f65970f80be4ec50e3ec693d9a26805b72c7fe507719552393"}
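The ErrImagePull: "pull QPS exceeded" errors above are kubelet's own client-side throttle, not a registry failure: image pulls pass through a token-bucket rate limiter sized by the kubelet's --registry-qps and --registry-burst settings (5 and 10 by default), and the burst of operator images all requested at once here exhausts the bucket. A small demonstration of that gate using the same client-go flowcontrol package kubelet relies on; the wiring inside kubelet itself is simplified away:

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Matches the documented kubelet defaults --registry-qps=5 and
	// --registry-burst=10 (assumed here, not read from this node).
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	// 15 back-to-back pulls: the first ~10 fit in the burst bucket,
	// the rest are rejected until tokens refill at 5 per second.
	for i := 1; i <= 15; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: allowed\n", i)
		} else {
			// kubelet surfaces this as ErrImagePull: "pull QPS exceeded"
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
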
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.084661 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" event={"ID":"e72595d8-8a2a-4b75-8d5d-881209734957","Type":"ContainerStarted","Data":"951628dba26b155d7a53cf8beb6d78148cfd5178b0877bcf8e126859bff62847"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.087153 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" event={"ID":"0d0b8077-9ce3-47a4-bb23-7b21a8874d1e","Type":"ContainerStarted","Data":"3708293bd5df3b84cefcfac86ac4281b26715d85a0eef9bf929a6b52987088c4"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.091918 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" event={"ID":"365a2cb5-0761-452a-a3e9-b19749919661","Type":"ContainerStarted","Data":"aaf7b219bf07cf3f8ce2a98eaab20cd72c0c6138023b5397f126d6cbad6b74e3"}
Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.092515 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" podUID="0d0b8077-9ce3-47a4-bb23-7b21a8874d1e"
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.097366 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" event={"ID":"00a9f518-1d32-4029-ab03-024c73526aa6","Type":"ContainerStarted","Data":"7e25c248aad7dfca15c3c8f2e7bfe748b9498cf8363a6a2ee6b8424aa2fe6bb4"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.097870 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf"]
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.098670 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" event={"ID":"dcd3565d-97bb-4e80-8620-5399b7ab6f2a","Type":"ContainerStarted","Data":"a737a4fcc07c899289348a73df12bce76a2e8dcb4265d9fa5c7fc09eb57704a3"}
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.101018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" event={"ID":"409368d6-8b01-4aa7-8d28-65c77d3158ab","Type":"ContainerStarted","Data":"ac5d3de741676c98b638d130f7313ec50a8f667fcaebb285b9ecc0da67fc012b"}
Feb 02 09:13:03 crc kubenswrapper[4720]: W0202 09:13:03.103807 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12badb48_0f9b_41a2_930d_7573f8485dcf.slice/crio-3ed0f3d78299412d3fec759f98dacceacd4dd4e52930006066a26c233d9d3c4a WatchSource:0}: Error finding container 3ed0f3d78299412d3fec759f98dacceacd4dd4e52930006066a26c233d9d3c4a: Status 404 returned error can't find the container with id 3ed0f3d78299412d3fec759f98dacceacd4dd4e52930006066a26c233d9d3c4a
Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.107189 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb"
event={"ID":"ae44cd5d-4fe1-4268-b247-d03075fd37b2","Type":"ContainerStarted","Data":"4ac1878ce1fab3beef668f039a0a312523544c038571953db0b7e38c9f266f7a"} Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.109120 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" event={"ID":"bfdd7555-2c9b-4f4f-a25c-289667ea0526","Type":"ContainerStarted","Data":"c3a5c005c32960895a1ea3c38d12975e3bec49e54bb69a81bfab85a556aab9d6"} Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.112124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" event={"ID":"2512bb69-cdd5-4288-a023-08271514a5ed","Type":"ContainerStarted","Data":"08a34e1408579cf5eceb7debe89a7dc6d3354007a1b227710cbee5e0e1ae1b7b"} Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.115258 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdpwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-jc9pb_openstack-operators(12badb48-0f9b-41a2-930d-7573f8485dcf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.116195 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb"] Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.116216 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" event={"ID":"13d9ccbd-a49d-4b71-9c76-251ad5309b8d","Type":"ContainerStarted","Data":"a6059a0849242a365ff7931e27cdc3b6ffaf639e1ef6b4649944fdc1fd87095a"} Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.116359 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" podUID="12badb48-0f9b-41a2-930d-7573f8485dcf" Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.120071 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" event={"ID":"4dd293f6-9311-41de-8c84-66780a5e7a77","Type":"ContainerStarted","Data":"6d49382598aae363b70564def46b95762efbd156381a0289451fd6f30f5a1e4e"} Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.126065 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" event={"ID":"0773964a-e514-4efc-8e88-ea5e71d4a7eb","Type":"ContainerStarted","Data":"6a87f16dc6c6b6f24b95bdeb5976958332c5718560c09d0b64c9c24f6c8c41fd"} Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.189481 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-5gncs"] Feb 02 09:13:03 crc kubenswrapper[4720]: W0202 09:13:03.192657 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda80257cd_1bb9_4c20_87d3_ab6741c78b57.slice/crio-686b7b76992c5454fa847fe52245d096fc2afa4453f207482a6cf684a8bbb6ad WatchSource:0}: Error finding container 686b7b76992c5454fa847fe52245d096fc2afa4453f207482a6cf684a8bbb6ad: Status 404 returned error can't find the container with id 686b7b76992c5454fa847fe52245d096fc2afa4453f207482a6cf684a8bbb6ad Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.194995 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qg2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-5gncs_openstack-operators(a80257cd-1bb9-4c20-87d3-ab6741c78b57): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.196485 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" podUID="a80257cd-1bb9-4c20-87d3-ab6741c78b57" Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.243166 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx"] Feb 02 09:13:03 crc kubenswrapper[4720]: W0202 09:13:03.263296 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e25cfa_ef34_4ee4_826e_767a4f154f15.slice/crio-b4e85d1a64050ea5079a143ba481f91d80c455cba8259fdc76a1f5c446dd26e4 WatchSource:0}: Error finding container b4e85d1a64050ea5079a143ba481f91d80c455cba8259fdc76a1f5c446dd26e4: Status 404 returned error can't find the container with id b4e85d1a64050ea5079a143ba481f91d80c455cba8259fdc76a1f5c446dd26e4 Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.463434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.463563 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.463615 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert podName:adbc4332-64c2-4e3d-82de-495f217179a5 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:05.463600609 +0000 UTC m=+1019.319226165 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" (UID: "adbc4332-64c2-4e3d-82de-495f217179a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.666260 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:03 crc kubenswrapper[4720]: I0202 09:13:03.666310 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.666433 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.666500 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:05.666483002 +0000 UTC m=+1019.522108548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "webhook-server-cert" not found Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.666507 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 09:13:03 crc kubenswrapper[4720]: E0202 09:13:03.666565 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:05.666550064 +0000 UTC m=+1019.522175730 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "metrics-server-cert" not found Feb 02 09:13:04 crc kubenswrapper[4720]: I0202 09:13:04.134281 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" event={"ID":"12badb48-0f9b-41a2-930d-7573f8485dcf","Type":"ContainerStarted","Data":"3ed0f3d78299412d3fec759f98dacceacd4dd4e52930006066a26c233d9d3c4a"} Feb 02 09:13:04 crc kubenswrapper[4720]: I0202 09:13:04.136846 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" event={"ID":"53e25cfa-ef34-4ee4-826e-767a4f154f15","Type":"ContainerStarted","Data":"b4e85d1a64050ea5079a143ba481f91d80c455cba8259fdc76a1f5c446dd26e4"} Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.137325 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" podUID="12badb48-0f9b-41a2-930d-7573f8485dcf" Feb 02 09:13:04 crc kubenswrapper[4720]: I0202 09:13:04.143064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" event={"ID":"a80257cd-1bb9-4c20-87d3-ab6741c78b57","Type":"ContainerStarted","Data":"686b7b76992c5454fa847fe52245d096fc2afa4453f207482a6cf684a8bbb6ad"} Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.144140 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" podUID="a80257cd-1bb9-4c20-87d3-ab6741c78b57" Feb 02 09:13:04 crc kubenswrapper[4720]: I0202 09:13:04.144996 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" event={"ID":"13347ee1-a6a4-435f-a5e5-8c9af5506dd9","Type":"ContainerStarted","Data":"c4da7c817f0c75fc4ff548bcb9e86de0f12e28096df4b58f3ad9957b4c83f6c4"} Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.148416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" podUID="0d0b8077-9ce3-47a4-bb23-7b21a8874d1e" Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.150309 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" 
podUID="e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e" Feb 02 09:13:04 crc kubenswrapper[4720]: I0202 09:13:04.780169 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.780325 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:04 crc kubenswrapper[4720]: E0202 09:13:04.780393 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert podName:98ee1d10-d444-4de0-a20c-99258ae4c5da nodeName:}" failed. No retries permitted until 2026-02-02 09:13:08.780377247 +0000 UTC m=+1022.636002803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert") pod "infra-operator-controller-manager-79955696d6-k9qvn" (UID: "98ee1d10-d444-4de0-a20c-99258ae4c5da") : secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.155174 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" podUID="a80257cd-1bb9-4c20-87d3-ab6741c78b57" Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.156198 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" podUID="12badb48-0f9b-41a2-930d-7573f8485dcf" Feb 02 09:13:05 crc kubenswrapper[4720]: I0202 09:13:05.489517 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.489718 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.489846 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert podName:adbc4332-64c2-4e3d-82de-495f217179a5 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:09.48981585 +0000 UTC m=+1023.345441436 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" (UID: "adbc4332-64c2-4e3d-82de-495f217179a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: I0202 09:13:05.692138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:05 crc kubenswrapper[4720]: I0202 09:13:05.692212 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.692347 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.692445 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:09.692403084 +0000 UTC m=+1023.548028640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "webhook-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.693167 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 09:13:05 crc kubenswrapper[4720]: E0202 09:13:05.693340 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:09.693304786 +0000 UTC m=+1023.548930422 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "metrics-server-cert" not found Feb 02 09:13:08 crc kubenswrapper[4720]: I0202 09:13:08.846149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:08 crc kubenswrapper[4720]: E0202 09:13:08.846363 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:08 crc kubenswrapper[4720]: E0202 09:13:08.846524 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert podName:98ee1d10-d444-4de0-a20c-99258ae4c5da nodeName:}" failed. No retries permitted until 2026-02-02 09:13:16.846506033 +0000 UTC m=+1030.702131589 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert") pod "infra-operator-controller-manager-79955696d6-k9qvn" (UID: "98ee1d10-d444-4de0-a20c-99258ae4c5da") : secret "infra-operator-webhook-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: I0202 09:13:09.557500 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.557679 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.557988 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert podName:adbc4332-64c2-4e3d-82de-495f217179a5 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:17.557967955 +0000 UTC m=+1031.413593521 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" (UID: "adbc4332-64c2-4e3d-82de-495f217179a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: I0202 09:13:09.761185 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:09 crc kubenswrapper[4720]: I0202 09:13:09.761335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.761477 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.761567 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.762161 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:17.761580466 +0000 UTC m=+1031.617206052 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "webhook-server-cert" not found Feb 02 09:13:09 crc kubenswrapper[4720]: E0202 09:13:09.762198 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs podName:813cdc5b-c252-4b55-8d8a-cf0bfde51059 nodeName:}" failed. No retries permitted until 2026-02-02 09:13:17.76218108 +0000 UTC m=+1031.617806666 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs") pod "openstack-operator-controller-manager-75d6c7dbc6-wwpdn" (UID: "813cdc5b-c252-4b55-8d8a-cf0bfde51059") : secret "metrics-server-cert" not found Feb 02 09:13:14 crc kubenswrapper[4720]: E0202 09:13:14.988150 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Feb 02 09:13:14 crc kubenswrapper[4720]: E0202 09:13:14.989074 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dxnqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9tgq4_openstack-operators(e29b414a-79dc-49f1-bf42-01bb60a090c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:14 crc kubenswrapper[4720]: E0202 09:13:14.990458 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" 
podUID="e29b414a-79dc-49f1-bf42-01bb60a090c5" Feb 02 09:13:15 crc kubenswrapper[4720]: E0202 09:13:15.225715 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" podUID="e29b414a-79dc-49f1-bf42-01bb60a090c5" Feb 02 09:13:15 crc kubenswrapper[4720]: E0202 09:13:15.583264 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 02 09:13:15 crc kubenswrapper[4720]: E0202 09:13:15.583588 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv7q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-2v5h2_openstack-operators(2512bb69-cdd5-4288-a023-08271514a5ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:15 crc kubenswrapper[4720]: E0202 09:13:15.585450 4720 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" podUID="2512bb69-cdd5-4288-a023-08271514a5ed" Feb 02 09:13:16 crc kubenswrapper[4720]: E0202 09:13:16.186553 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 02 09:13:16 crc kubenswrapper[4720]: E0202 09:13:16.186916 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vhcrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-b4svf_openstack-operators(13347ee1-a6a4-435f-a5e5-8c9af5506dd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:16 crc kubenswrapper[4720]: E0202 09:13:16.190848 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" 
podUID="13347ee1-a6a4-435f-a5e5-8c9af5506dd9" Feb 02 09:13:16 crc kubenswrapper[4720]: E0202 09:13:16.233927 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" podUID="2512bb69-cdd5-4288-a023-08271514a5ed" Feb 02 09:13:16 crc kubenswrapper[4720]: E0202 09:13:16.241896 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" podUID="13347ee1-a6a4-435f-a5e5-8c9af5506dd9" Feb 02 09:13:16 crc kubenswrapper[4720]: I0202 09:13:16.871685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:16 crc kubenswrapper[4720]: I0202 09:13:16.897254 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ee1d10-d444-4de0-a20c-99258ae4c5da-cert\") pod \"infra-operator-controller-manager-79955696d6-k9qvn\" (UID: \"98ee1d10-d444-4de0-a20c-99258ae4c5da\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:16 crc kubenswrapper[4720]: I0202 09:13:16.925619 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s5w5z" Feb 02 09:13:16 crc kubenswrapper[4720]: I0202 09:13:16.934669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.583847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.587090 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adbc4332-64c2-4e3d-82de-495f217179a5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf\" (UID: \"adbc4332-64c2-4e3d-82de-495f217179a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.665164 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pcwbb" Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.673057 4720 util.go:30] "No sandbox for pod can be found. 
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.786821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.786931 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.792565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-metrics-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.793420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/813cdc5b-c252-4b55-8d8a-cf0bfde51059-webhook-certs\") pod \"openstack-operator-controller-manager-75d6c7dbc6-wwpdn\" (UID: \"813cdc5b-c252-4b55-8d8a-cf0bfde51059\") " pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.902308 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.902392 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.938585 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tk75d"
Feb 02 09:13:17 crc kubenswrapper[4720]: I0202 09:13:17.946943 4720 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:21 crc kubenswrapper[4720]: E0202 09:13:21.409506 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 02 09:13:21 crc kubenswrapper[4720]: E0202 09:13:21.410373 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vnl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-bbsfl_openstack-operators(00a9f518-1d32-4029-ab03-024c73526aa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:21 crc kubenswrapper[4720]: E0202 09:13:21.411606 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" podUID="00a9f518-1d32-4029-ab03-024c73526aa6" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.036725 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.036917 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk5zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-m5hl6_openstack-operators(bfdd7555-2c9b-4f4f-a25c-289667ea0526): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.038329 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" podUID="bfdd7555-2c9b-4f4f-a25c-289667ea0526" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.282032 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" podUID="00a9f518-1d32-4029-ab03-024c73526aa6" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.283064 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" podUID="bfdd7555-2c9b-4f4f-a25c-289667ea0526" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.891684 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.891865 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxhrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-g7wsx_openstack-operators(53e25cfa-ef34-4ee4-826e-767a4f154f15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:13:22 crc kubenswrapper[4720]: E0202 09:13:22.893084 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" podUID="53e25cfa-ef34-4ee4-826e-767a4f154f15" Feb 02 09:13:23 crc 
kubenswrapper[4720]: E0202 09:13:23.291319 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" podUID="53e25cfa-ef34-4ee4-826e-767a4f154f15" Feb 02 09:13:25 crc kubenswrapper[4720]: I0202 09:13:25.894249 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf"] Feb 02 09:13:25 crc kubenswrapper[4720]: I0202 09:13:25.990358 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn"] Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.018273 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn"] Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.306173 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" event={"ID":"f48341fa-8eb8-49f2-b177-2c10de4db8fd","Type":"ContainerStarted","Data":"37dc917f565b27fdbd39ffa24dbb93daa13e4db20fa121f588882d0d44c15d97"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.306538 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.308095 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" event={"ID":"13d9ccbd-a49d-4b71-9c76-251ad5309b8d","Type":"ContainerStarted","Data":"1f59bef06d4c7f565da1260543863c854b210c2e22fd712c49d50bef27357fca"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.308228 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.309824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" event={"ID":"4dd293f6-9311-41de-8c84-66780a5e7a77","Type":"ContainerStarted","Data":"e6327dc007c41ac2fcb456ecaa185cb6b30ff617e27463130d2a93c5be99892c"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.310000 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.311372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" event={"ID":"409368d6-8b01-4aa7-8d28-65c77d3158ab","Type":"ContainerStarted","Data":"a48532a9b61bb110a80dbbf84145b450168999391d1f83676ccec2936f3ecb8f"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.311713 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.313894 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" 
event={"ID":"ae44cd5d-4fe1-4268-b247-d03075fd37b2","Type":"ContainerStarted","Data":"971deaafac9d5ffc47bb298f987e866a1178eb013a09ff5db23f04a1567d8a6f"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.314349 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.315967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" event={"ID":"e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e","Type":"ContainerStarted","Data":"ba85b63cf9cfe09267949670cc2d72780c9ffc3ff146a7b5f9ba5168a66e2053"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.316212 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.318116 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" event={"ID":"98ee1d10-d444-4de0-a20c-99258ae4c5da","Type":"ContainerStarted","Data":"eb2ec88206f32a816e730a0156564a5d604e1b96b9f28e645fe479db69c44df8"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.319269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" event={"ID":"813cdc5b-c252-4b55-8d8a-cf0bfde51059","Type":"ContainerStarted","Data":"7269db84abbf803aff8c74b3a91960aee58eaa0048761d583355376aa813d95d"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.319306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" event={"ID":"813cdc5b-c252-4b55-8d8a-cf0bfde51059","Type":"ContainerStarted","Data":"7119eb2ee7c14f356dc64802b9e2d4a358f2311f5b4107e6efdc82b9060d7743"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.319795 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.320744 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" event={"ID":"a80257cd-1bb9-4c20-87d3-ab6741c78b57","Type":"ContainerStarted","Data":"de78850f886f98db7204b1d4084660eff9542ac0953e4b20fd908cec93edf972"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.321087 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.325393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" event={"ID":"365a2cb5-0761-452a-a3e9-b19749919661","Type":"ContainerStarted","Data":"479b812b066ae75b30634446568feb0b801b317ccd92c50b1538255c32af593b"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.326072 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.327154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" 
event={"ID":"74c9a454-0e13-4b29-89d5-cbfd77d7db21","Type":"ContainerStarted","Data":"c8e1293aaa0b8124639b2b08513566c2de2fdb8f3ee2a3862dea51324c29f281"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.327482 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.328505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" event={"ID":"e72595d8-8a2a-4b75-8d5d-881209734957","Type":"ContainerStarted","Data":"036ed5f97f2afae418243ffdbc4cdb742b12fde4edecc12be412e540a59e93e9"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.328839 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.329845 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" event={"ID":"12badb48-0f9b-41a2-930d-7573f8485dcf","Type":"ContainerStarted","Data":"9c233e30e3610680870e90c7668dfba6675b61ac896a5d197f60d7c6294c543c"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.330169 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.331038 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" event={"ID":"dcd3565d-97bb-4e80-8620-5399b7ab6f2a","Type":"ContainerStarted","Data":"5ad43979bc7b1ba9e1d24f98723fc1f8be76fe3dc0dbca7a7ade48a01da8c7e5"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.331343 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.332198 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" event={"ID":"0d0b8077-9ce3-47a4-bb23-7b21a8874d1e","Type":"ContainerStarted","Data":"f2f8e8137185367df45871bc7497077928fd04d4bf3bb350a2ab31fe62d84066"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.332513 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.333382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" event={"ID":"0773964a-e514-4efc-8e88-ea5e71d4a7eb","Type":"ContainerStarted","Data":"733131e1da8d8e026a9ddebaa46474febcc92b30f0d7ec9e4afb02c57293696e"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.333697 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.334450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" event={"ID":"adbc4332-64c2-4e3d-82de-495f217179a5","Type":"ContainerStarted","Data":"f9f1fd45a876adee1adba86c39bcffbd51cf4586f8072a4ca311c9613add99e0"} Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 
09:13:26.351364 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" podStartSLOduration=7.739844385 podStartE2EDuration="26.351348798s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.333025404 +0000 UTC m=+1016.188650960" lastFinishedPulling="2026-02-02 09:13:20.944529817 +0000 UTC m=+1034.800155373" observedRunningTime="2026-02-02 09:13:26.350446096 +0000 UTC m=+1040.206071652" watchObservedRunningTime="2026-02-02 09:13:26.351348798 +0000 UTC m=+1040.206974354"
Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.390553 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" podStartSLOduration=5.690760782 podStartE2EDuration="26.390537314s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.711763613 +0000 UTC m=+1016.567389169" lastFinishedPulling="2026-02-02 09:13:23.411540125 +0000 UTC m=+1037.267165701" observedRunningTime="2026-02-02 09:13:26.384769342 +0000 UTC m=+1040.240394888" watchObservedRunningTime="2026-02-02 09:13:26.390537314 +0000 UTC m=+1040.246162870"
Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.420736 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" podStartSLOduration=5.805146983 podStartE2EDuration="26.420721669s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.818398152 +0000 UTC m=+1016.674023698" lastFinishedPulling="2026-02-02 09:13:23.433972808 +0000 UTC m=+1037.289598384" observedRunningTime="2026-02-02 09:13:26.417314774 +0000 UTC m=+1040.272940330" watchObservedRunningTime="2026-02-02 09:13:26.420721669 +0000 UTC m=+1040.276347225"
Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.434434 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" podStartSLOduration=5.739459393 podStartE2EDuration="26.434413587s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.716628852 +0000 UTC m=+1016.572254408" lastFinishedPulling="2026-02-02 09:13:23.411583006 +0000 UTC m=+1037.267208602" observedRunningTime="2026-02-02 09:13:26.431929365 +0000 UTC m=+1040.287554911" watchObservedRunningTime="2026-02-02 09:13:26.434413587 +0000 UTC m=+1040.290039143"
Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.471819 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" podStartSLOduration=3.056262742 podStartE2EDuration="25.471804938s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.03328429 +0000 UTC m=+1016.888909846" lastFinishedPulling="2026-02-02 09:13:25.448826476 +0000 UTC m=+1039.304452042" observedRunningTime="2026-02-02 09:13:26.46982122 +0000 UTC m=+1040.325446776" watchObservedRunningTime="2026-02-02 09:13:26.471804938 +0000 UTC m=+1040.327430494"
Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.528265 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" podStartSLOduration=5.714469377 podStartE2EDuration="26.52824809s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.043131646 +0000 UTC m=+1015.898757192" lastFinishedPulling="2026-02-02 09:13:22.856910349 +0000 UTC m=+1036.712535905" observedRunningTime="2026-02-02 09:13:26.524762654 +0000 UTC m=+1040.380388210" watchObservedRunningTime="2026-02-02 09:13:26.52824809 +0000 UTC m=+1040.383873646"
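The pod_startup_latency_tracker records all fit one relation: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling). For the glance-operator record above: 26.351348798s - (09:13:20.944529817 - 09:13:02.333025404) = 26.351348798s - 18.611504413s = 7.739844385s, exactly the logged podStartSLOduration. A short check of that arithmetic (timestamps copied from the record; the relation is inferred from these fields rather than quoted from kubelet source):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the "+0000 UTC" form the kubelet prints.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 09:13:00 +0000 UTC")
	firstPull := mustParse("2026-02-02 09:13:02.333025404 +0000 UTC")
	lastPull := mustParse("2026-02-02 09:13:20.944529817 +0000 UTC")
	running := mustParse("2026-02-02 09:13:26.351348798 +0000 UTC")

	e2e := running.Sub(created)          // 26.351348798s
	slo := e2e - lastPull.Sub(firstPull) // 7.739844385s
	fmt.Println(e2e, slo)
}

The m=+NNNN offsets alongside each wall-clock timestamp are seconds of monotonic clock since this kubelet process started.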
podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.043131646 +0000 UTC m=+1015.898757192" lastFinishedPulling="2026-02-02 09:13:22.856910349 +0000 UTC m=+1036.712535905" observedRunningTime="2026-02-02 09:13:26.524762654 +0000 UTC m=+1040.380388210" watchObservedRunningTime="2026-02-02 09:13:26.52824809 +0000 UTC m=+1040.383873646" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.529196 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" podStartSLOduration=5.111021674 podStartE2EDuration="25.529187713s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.993284014 +0000 UTC m=+1016.848909570" lastFinishedPulling="2026-02-02 09:13:23.411450053 +0000 UTC m=+1037.267075609" observedRunningTime="2026-02-02 09:13:26.500580738 +0000 UTC m=+1040.356206294" watchObservedRunningTime="2026-02-02 09:13:26.529187713 +0000 UTC m=+1040.384813269" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.565996 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" podStartSLOduration=3.295446182 podStartE2EDuration="25.565982061s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.194859354 +0000 UTC m=+1017.050484910" lastFinishedPulling="2026-02-02 09:13:25.465395233 +0000 UTC m=+1039.321020789" observedRunningTime="2026-02-02 09:13:26.563619152 +0000 UTC m=+1040.419244708" watchObservedRunningTime="2026-02-02 09:13:26.565982061 +0000 UTC m=+1040.421607607" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.645719 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" podStartSLOduration=3.164878011 podStartE2EDuration="25.645701826s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.902397433 +0000 UTC m=+1016.758022989" lastFinishedPulling="2026-02-02 09:13:25.383221248 +0000 UTC m=+1039.238846804" observedRunningTime="2026-02-02 09:13:26.627950569 +0000 UTC m=+1040.483576125" watchObservedRunningTime="2026-02-02 09:13:26.645701826 +0000 UTC m=+1040.501327382" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.648763 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" podStartSLOduration=3.213155823 podStartE2EDuration="25.648737461s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.029764634 +0000 UTC m=+1016.885390190" lastFinishedPulling="2026-02-02 09:13:25.465346272 +0000 UTC m=+1039.320971828" observedRunningTime="2026-02-02 09:13:26.647185313 +0000 UTC m=+1040.502810869" watchObservedRunningTime="2026-02-02 09:13:26.648737461 +0000 UTC m=+1040.504363017" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.691674 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" podStartSLOduration=25.691657699 podStartE2EDuration="25.691657699s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:13:26.69047921 +0000 UTC m=+1040.546104766" 
watchObservedRunningTime="2026-02-02 09:13:26.691657699 +0000 UTC m=+1040.547283255" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.718278 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" podStartSLOduration=5.9047906900000005 podStartE2EDuration="26.718261685s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.043396313 +0000 UTC m=+1015.899021869" lastFinishedPulling="2026-02-02 09:13:22.856867308 +0000 UTC m=+1036.712492864" observedRunningTime="2026-02-02 09:13:26.71438977 +0000 UTC m=+1040.570015336" watchObservedRunningTime="2026-02-02 09:13:26.718261685 +0000 UTC m=+1040.573887241" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.739359 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" podStartSLOduration=4.294818202 podStartE2EDuration="26.739340364s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.267741684 +0000 UTC m=+1016.123367240" lastFinishedPulling="2026-02-02 09:13:24.712263846 +0000 UTC m=+1038.567889402" observedRunningTime="2026-02-02 09:13:26.738848523 +0000 UTC m=+1040.594474079" watchObservedRunningTime="2026-02-02 09:13:26.739340364 +0000 UTC m=+1040.594965920" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.791317 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" podStartSLOduration=5.734933767 podStartE2EDuration="25.791298205s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.80050172 +0000 UTC m=+1016.656127276" lastFinishedPulling="2026-02-02 09:13:22.856866158 +0000 UTC m=+1036.712491714" observedRunningTime="2026-02-02 09:13:26.762200249 +0000 UTC m=+1040.617825805" watchObservedRunningTime="2026-02-02 09:13:26.791298205 +0000 UTC m=+1040.646923761" Feb 02 09:13:26 crc kubenswrapper[4720]: I0202 09:13:26.953371 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" podStartSLOduration=3.62273471 podStartE2EDuration="25.953353771s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.115153869 +0000 UTC m=+1016.970779425" lastFinishedPulling="2026-02-02 09:13:25.44577293 +0000 UTC m=+1039.301398486" observedRunningTime="2026-02-02 09:13:26.79022801 +0000 UTC m=+1040.645853566" watchObservedRunningTime="2026-02-02 09:13:26.953353771 +0000 UTC m=+1040.808979327" Feb 02 09:13:28 crc kubenswrapper[4720]: I0202 09:13:28.355406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" event={"ID":"e29b414a-79dc-49f1-bf42-01bb60a090c5","Type":"ContainerStarted","Data":"8a9a9fca55e0205a4c337d9a4561ed097006d27d1d824cfbde17b1c123a4f478"} Feb 02 09:13:28 crc kubenswrapper[4720]: I0202 09:13:28.359019 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:28 crc kubenswrapper[4720]: I0202 09:13:28.378495 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" podStartSLOduration=2.950501965 
podStartE2EDuration="27.37848114s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.984423735 +0000 UTC m=+1016.840049291" lastFinishedPulling="2026-02-02 09:13:27.41240291 +0000 UTC m=+1041.268028466" observedRunningTime="2026-02-02 09:13:28.373407055 +0000 UTC m=+1042.229032621" watchObservedRunningTime="2026-02-02 09:13:28.37848114 +0000 UTC m=+1042.234106696" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.371721 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" event={"ID":"2512bb69-cdd5-4288-a023-08271514a5ed","Type":"ContainerStarted","Data":"94be77c1f8212e3f02e9dea3c079daf7d46e3c8a438efecf07fbfbefd9b7323a"} Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.372543 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.373276 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" event={"ID":"adbc4332-64c2-4e3d-82de-495f217179a5","Type":"ContainerStarted","Data":"e7e732750d26810e9a0fac7b3b82dbba40e6d0ba84a83806765740866b33c8b3"} Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.373737 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.375154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" event={"ID":"98ee1d10-d444-4de0-a20c-99258ae4c5da","Type":"ContainerStarted","Data":"0060d87ddfb8000ae2ec8b966510d5732cc7528634064006370f5bafd2d7f72e"} Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.375359 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.410212 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" podStartSLOduration=3.393678306 podStartE2EDuration="30.410191184s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.357694153 +0000 UTC m=+1016.213319699" lastFinishedPulling="2026-02-02 09:13:29.374207021 +0000 UTC m=+1043.229832577" observedRunningTime="2026-02-02 09:13:30.390988871 +0000 UTC m=+1044.246614427" watchObservedRunningTime="2026-02-02 09:13:30.410191184 +0000 UTC m=+1044.265816740" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.412247 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" podStartSLOduration=27.080824495 podStartE2EDuration="30.412227325s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:26.041777465 +0000 UTC m=+1039.897403021" lastFinishedPulling="2026-02-02 09:13:29.373180285 +0000 UTC m=+1043.228805851" observedRunningTime="2026-02-02 09:13:30.409937498 +0000 UTC m=+1044.265563054" watchObservedRunningTime="2026-02-02 09:13:30.412227325 +0000 UTC m=+1044.267852881" Feb 02 09:13:30 crc kubenswrapper[4720]: I0202 09:13:30.456213 4720 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" podStartSLOduration=26.02955381 podStartE2EDuration="29.456193909s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:25.945197844 +0000 UTC m=+1039.800823400" lastFinishedPulling="2026-02-02 09:13:29.371837933 +0000 UTC m=+1043.227463499" observedRunningTime="2026-02-02 09:13:30.444296035 +0000 UTC m=+1044.299921601" watchObservedRunningTime="2026-02-02 09:13:30.456193909 +0000 UTC m=+1044.311819465" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.178062 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-ts2bs" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.239915 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-txm2d" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.253272 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-ccx5d" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.266452 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bw8tf" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.318165 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-42lrb" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.545140 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-99gf8" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.636205 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-z6v6h" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.714677 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-np86c" Feb 02 09:13:31 crc kubenswrapper[4720]: I0202 09:13:31.738545 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8s87h" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.000264 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-r2h8r" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.027201 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-v5vdx" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.094188 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r5wzv" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.242801 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-jc9pb" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.248684 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-5gncs" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.389292 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" event={"ID":"13347ee1-a6a4-435f-a5e5-8c9af5506dd9","Type":"ContainerStarted","Data":"ad916044c4934ac86758b50410b078c56aa7eec308d81b09137fc5432ab8c1ff"} Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.389513 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:32 crc kubenswrapper[4720]: I0202 09:13:32.411898 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" podStartSLOduration=3.084716825 podStartE2EDuration="31.411862079s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.107875009 +0000 UTC m=+1016.963500565" lastFinishedPulling="2026-02-02 09:13:31.435020263 +0000 UTC m=+1045.290645819" observedRunningTime="2026-02-02 09:13:32.40500978 +0000 UTC m=+1046.260635346" watchObservedRunningTime="2026-02-02 09:13:32.411862079 +0000 UTC m=+1046.267487655" Feb 02 09:13:35 crc kubenswrapper[4720]: I0202 09:13:35.425859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" event={"ID":"00a9f518-1d32-4029-ab03-024c73526aa6","Type":"ContainerStarted","Data":"4c0f5e313cb8b35f66edfed4fe2fd15dc275dffb78281dc1d7c9776738597983"} Feb 02 09:13:35 crc kubenswrapper[4720]: I0202 09:13:35.426788 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:35 crc kubenswrapper[4720]: I0202 09:13:35.447826 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" podStartSLOduration=3.767836091 podStartE2EDuration="35.447802364s" podCreationTimestamp="2026-02-02 09:13:00 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.750806475 +0000 UTC m=+1016.606432031" lastFinishedPulling="2026-02-02 09:13:34.430772708 +0000 UTC m=+1048.286398304" observedRunningTime="2026-02-02 09:13:35.445119147 +0000 UTC m=+1049.300744743" watchObservedRunningTime="2026-02-02 09:13:35.447802364 +0000 UTC m=+1049.303427930" Feb 02 09:13:36 crc kubenswrapper[4720]: I0202 09:13:36.941021 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-k9qvn" Feb 02 09:13:37 crc kubenswrapper[4720]: I0202 09:13:37.441652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" event={"ID":"bfdd7555-2c9b-4f4f-a25c-289667ea0526","Type":"ContainerStarted","Data":"2090e3e94e018f023be2ed732f804ac398f720c870992a2deafbb3d9a62a0cda"} Feb 02 09:13:37 crc kubenswrapper[4720]: I0202 09:13:37.442499 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:37 crc kubenswrapper[4720]: I0202 09:13:37.467139 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" podStartSLOduration=3.019413475 
podStartE2EDuration="36.466848086s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:02.986685191 +0000 UTC m=+1016.842310747" lastFinishedPulling="2026-02-02 09:13:36.434119792 +0000 UTC m=+1050.289745358" observedRunningTime="2026-02-02 09:13:37.458570662 +0000 UTC m=+1051.314196258" watchObservedRunningTime="2026-02-02 09:13:37.466848086 +0000 UTC m=+1051.322473652" Feb 02 09:13:37 crc kubenswrapper[4720]: I0202 09:13:37.681302 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf" Feb 02 09:13:37 crc kubenswrapper[4720]: I0202 09:13:37.955023 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75d6c7dbc6-wwpdn" Feb 02 09:13:38 crc kubenswrapper[4720]: I0202 09:13:38.448700 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" event={"ID":"53e25cfa-ef34-4ee4-826e-767a4f154f15","Type":"ContainerStarted","Data":"ad479c637d6bb2de4060bbec45647b981fefef715d8b00039eca5efc8d8c1e1a"} Feb 02 09:13:38 crc kubenswrapper[4720]: I0202 09:13:38.468688 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g7wsx" podStartSLOduration=3.302367591 podStartE2EDuration="37.468667048s" podCreationTimestamp="2026-02-02 09:13:01 +0000 UTC" firstStartedPulling="2026-02-02 09:13:03.266448479 +0000 UTC m=+1017.122074025" lastFinishedPulling="2026-02-02 09:13:37.432747926 +0000 UTC m=+1051.288373482" observedRunningTime="2026-02-02 09:13:38.464872774 +0000 UTC m=+1052.320498350" watchObservedRunningTime="2026-02-02 09:13:38.468667048 +0000 UTC m=+1052.324292614" Feb 02 09:13:41 crc kubenswrapper[4720]: I0202 09:13:41.258747 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2v5h2" Feb 02 09:13:41 crc kubenswrapper[4720]: I0202 09:13:41.423242 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bbsfl" Feb 02 09:13:41 crc kubenswrapper[4720]: I0202 09:13:41.834658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-m5hl6" Feb 02 09:13:41 crc kubenswrapper[4720]: I0202 09:13:41.878831 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9tgq4" Feb 02 09:13:42 crc kubenswrapper[4720]: I0202 09:13:42.178366 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4svf" Feb 02 09:13:47 crc kubenswrapper[4720]: I0202 09:13:47.902172 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:13:47 crc kubenswrapper[4720]: I0202 09:13:47.904422 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:13:47 crc kubenswrapper[4720]: I0202 09:13:47.904501 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:13:47 crc kubenswrapper[4720]: I0202 09:13:47.905405 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:13:47 crc kubenswrapper[4720]: I0202 09:13:47.905522 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff" gracePeriod=600 Feb 02 09:13:48 crc kubenswrapper[4720]: I0202 09:13:48.540746 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff" exitCode=0 Feb 02 09:13:48 crc kubenswrapper[4720]: I0202 09:13:48.540831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff"} Feb 02 09:13:48 crc kubenswrapper[4720]: I0202 09:13:48.540968 4720 scope.go:117] "RemoveContainer" containerID="08c9c2a3c22cfda2f1813f2d513a474efb4e6630c2e4cb574188e46dafd49a3d" Feb 02 09:13:51 crc kubenswrapper[4720]: I0202 09:13:51.573288 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0"} Feb 02 09:13:58 crc kubenswrapper[4720]: I0202 09:13:58.919738 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"] Feb 02 09:13:58 crc kubenswrapper[4720]: I0202 09:13:58.926762 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:58 crc kubenswrapper[4720]: I0202 09:13:58.930504 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 09:13:58 crc kubenswrapper[4720]: I0202 09:13:58.933671 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rcddb" Feb 02 09:13:58 crc kubenswrapper[4720]: I0202 09:13:58.938111 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"] Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.009917 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"] Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.011243 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.013336 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.021470 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.021532 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnx8\" (UniqueName: \"kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.027757 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"] Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.123083 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g62p\" (UniqueName: \"kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.123157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.123231 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.123251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.124197 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnx8\" (UniqueName: \"kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.124270 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 
09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.142576 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnx8\" (UniqueName: \"kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8\") pod \"dnsmasq-dns-675f4bcbfc-b4dpx\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.225386 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g62p\" (UniqueName: \"kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.225512 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.225572 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.227511 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.227575 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.245459 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g62p\" (UniqueName: \"kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p\") pod \"dnsmasq-dns-78dd6ddcc-hk9c6\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.256208 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.328929 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.517090 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"] Feb 02 09:13:59 crc kubenswrapper[4720]: W0202 09:13:59.528750 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59796c6_17b1_494a_aef6_e230e60af54a.slice/crio-bd2bfca0fe482d0fb3ed223a52b0216a77181d5167ce5035947b044200afa1c2 WatchSource:0}: Error finding container bd2bfca0fe482d0fb3ed223a52b0216a77181d5167ce5035947b044200afa1c2: Status 404 returned error can't find the container with id bd2bfca0fe482d0fb3ed223a52b0216a77181d5167ce5035947b044200afa1c2 Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.591036 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"] Feb 02 09:13:59 crc kubenswrapper[4720]: W0202 09:13:59.594228 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f04ee95_76b7_461f_8b63_3724206b1296.slice/crio-3dc8b107c09eac172a7094c998112ee87c6b4bfeaf139f15f7f1b6d2376ca389 WatchSource:0}: Error finding container 3dc8b107c09eac172a7094c998112ee87c6b4bfeaf139f15f7f1b6d2376ca389: Status 404 returned error can't find the container with id 3dc8b107c09eac172a7094c998112ee87c6b4bfeaf139f15f7f1b6d2376ca389 Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.640167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" event={"ID":"8f04ee95-76b7-461f-8b63-3724206b1296","Type":"ContainerStarted","Data":"3dc8b107c09eac172a7094c998112ee87c6b4bfeaf139f15f7f1b6d2376ca389"} Feb 02 09:13:59 crc kubenswrapper[4720]: I0202 09:13:59.641460 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" event={"ID":"b59796c6-17b1-494a-aef6-e230e60af54a","Type":"ContainerStarted","Data":"bd2bfca0fe482d0fb3ed223a52b0216a77181d5167ce5035947b044200afa1c2"} Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.714124 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"] Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.735944 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"] Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.737025 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.750189 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"] Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.809856 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hnp4\" (UniqueName: \"kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.809942 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.809986 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.913922 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.914031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hnp4\" (UniqueName: \"kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.914104 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.916424 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.919355 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:01 crc kubenswrapper[4720]: I0202 09:14:01.964022 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hnp4\" (UniqueName: 
\"kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4\") pod \"dnsmasq-dns-666b6646f7-wh75d\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.017055 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.037479 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.038843 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.054762 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.066484 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.117055 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.117116 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hj88\" (UniqueName: \"kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.117166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.219640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.219930 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.219960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hj88\" (UniqueName: \"kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.220916 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.220926 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.237535 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hj88\" (UniqueName: \"kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88\") pod \"dnsmasq-dns-57d769cc4f-b5p4m\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.369357 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.597015 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.672958 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" event={"ID":"e13c3615-2a60-40a6-b6a4-a5c410520373","Type":"ContainerStarted","Data":"fedc74913af89d568f3f22c90b7dc28c439f81723376d0b4b5e412879eec079a"} Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.824042 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"] Feb 02 09:14:02 crc kubenswrapper[4720]: W0202 09:14:02.827183 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba3c766a_6925_496d_bf40_1d93aa6a1d8c.slice/crio-27be7a99a81ccef5921cb56beec6500f954f44adee3bb4dadc2fd093e4b41d26 WatchSource:0}: Error finding container 27be7a99a81ccef5921cb56beec6500f954f44adee3bb4dadc2fd093e4b41d26: Status 404 returned error can't find the container with id 27be7a99a81ccef5921cb56beec6500f954f44adee3bb4dadc2fd093e4b41d26 Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.880151 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.881274 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.883999 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.884125 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.884174 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.884335 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.884457 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b9tk4" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.884840 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.887972 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.904474 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927582 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927642 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927666 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927793 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927826 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927963 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.927982 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.928010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rm5\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:02 crc kubenswrapper[4720]: I0202 09:14:02.928035 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029289 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029313 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " 
pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029374 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029478 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029544 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rm5\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029583 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.029620 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.030157 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.030333 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.030444 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.030492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.030633 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.031635 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.035994 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.036012 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.036078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.042293 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.046146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rm5\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.054689 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") " pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.183557 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.198936 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.199047 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.203456 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.203748 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4p7fg" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.204179 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.204600 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.204653 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.205015 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.205027 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.208354 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233241 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233286 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233308 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233332 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dwl\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233358 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233396 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233422 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233436 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233455 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.233474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334571 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334678 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334721 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dwl\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334739 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334755 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334777 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.334804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.337223 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.337801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.338099 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.338362 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.338399 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.338712 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.342698 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.344249 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.346050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.351465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.353684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.357505 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-m6dwl\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.372672 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.524118 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:14:03 crc kubenswrapper[4720]: I0202 09:14:03.681866 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" event={"ID":"ba3c766a-6925-496d-bf40-1d93aa6a1d8c","Type":"ContainerStarted","Data":"27be7a99a81ccef5921cb56beec6500f954f44adee3bb4dadc2fd093e4b41d26"} Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.422194 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.425155 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.428740 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.432334 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dvhvd" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.432561 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.432780 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.460351 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.468751 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kolla-config\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558360 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-default\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558387 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwq2b\" (UniqueName: \"kubernetes.io/projected/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kube-api-access-pwq2b\") pod \"openstack-galera-0\" (UID: 
\"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558455 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558525 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558581 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.558623 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.659998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660038 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660105 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660141 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kolla-config\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-default\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.660202 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwq2b\" (UniqueName: \"kubernetes.io/projected/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kube-api-access-pwq2b\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.662489 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.663180 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-generated\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.664109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kolla-config\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.664418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-config-data-default\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.664587 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-operator-scripts\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.671786 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.681548 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwq2b\" (UniqueName: \"kubernetes.io/projected/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-kube-api-access-pwq2b\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.683305 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289905c2-8b8c-4d85-a9d4-19ac7c9b9b06-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.703791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06\") " pod="openstack/openstack-galera-0" Feb 02 09:14:04 crc kubenswrapper[4720]: I0202 09:14:04.766467 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:05.997308 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:05.998748 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.009499 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.010132 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.010327 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.024391 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.038250 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4brx2" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.084872 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.084977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085035 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085077 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085108 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8h9\" (UniqueName: \"kubernetes.io/projected/29c13267-2f9e-4e1c-b52f-66be31da5155-kube-api-access-5d8h9\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085152 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085188 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.085224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.185985 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186026 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8h9\" (UniqueName: \"kubernetes.io/projected/29c13267-2f9e-4e1c-b52f-66be31da5155-kube-api-access-5d8h9\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186077 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186114 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186169 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.186194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.187029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.187863 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.193240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.194443 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c13267-2f9e-4e1c-b52f-66be31da5155-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.196407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.205146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29c13267-2f9e-4e1c-b52f-66be31da5155-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.205540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c13267-2f9e-4e1c-b52f-66be31da5155-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.212832 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.223863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8h9\" (UniqueName: \"kubernetes.io/projected/29c13267-2f9e-4e1c-b52f-66be31da5155-kube-api-access-5d8h9\") pod \"openstack-cell1-galera-0\" (UID: \"29c13267-2f9e-4e1c-b52f-66be31da5155\") " pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.340284 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.373675 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.374870 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.376857 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.383609 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4fp42" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.383680 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.414792 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.489386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.489445 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.489491 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kolla-config\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.489530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-config-data\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.489560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhq8\" (UniqueName: \"kubernetes.io/projected/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kube-api-access-xdhq8\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.591606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.591864 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.591909 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kolla-config\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.591940 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-config-data\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.591959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhq8\" (UniqueName: \"kubernetes.io/projected/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kube-api-access-xdhq8\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.592748 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-config-data\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.592990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kolla-config\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.596621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.598283 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f3a7ecf-2ee4-4f15-9785-bc895935d771-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.605566 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhq8\" (UniqueName: \"kubernetes.io/projected/8f3a7ecf-2ee4-4f15-9785-bc895935d771-kube-api-access-xdhq8\") pod \"memcached-0\" (UID: \"8f3a7ecf-2ee4-4f15-9785-bc895935d771\") " pod="openstack/memcached-0" Feb 02 09:14:06 crc kubenswrapper[4720]: I0202 09:14:06.692413 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.099138 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.100270 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.103757 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2gpg2" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.114832 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.234363 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvp2v\" (UniqueName: \"kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v\") pod \"kube-state-metrics-0\" (UID: \"26b9fd3f-f554-4920-ba34-8e8dc34b78ed\") " pod="openstack/kube-state-metrics-0" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.335553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvp2v\" (UniqueName: \"kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v\") pod \"kube-state-metrics-0\" (UID: \"26b9fd3f-f554-4920-ba34-8e8dc34b78ed\") " pod="openstack/kube-state-metrics-0" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.357765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvp2v\" (UniqueName: \"kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v\") pod \"kube-state-metrics-0\" (UID: \"26b9fd3f-f554-4920-ba34-8e8dc34b78ed\") " pod="openstack/kube-state-metrics-0" Feb 02 09:14:08 crc kubenswrapper[4720]: I0202 09:14:08.414421 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.715212 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-774qf"] Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.716468 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.718979 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.719168 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7n6l6" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.722930 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.728904 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b979n"] Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.731713 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.736013 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-774qf"] Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.751401 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b979n"] Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35455de2-123c-442f-88de-e3fa878b3c09-scripts\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-log-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799740 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-lib\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-run\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799780 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-etc-ovs\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799797 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799825 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c88c8b-430e-40d7-9598-464d1dbead23-scripts\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799853 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-ovn-controller-tls-certs\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799906 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-combined-ca-bundle\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799922 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfxm\" (UniqueName: \"kubernetes.io/projected/35455de2-123c-442f-88de-e3fa878b3c09-kube-api-access-zsfxm\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799948 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85j7\" (UniqueName: \"kubernetes.io/projected/57c88c8b-430e-40d7-9598-464d1dbead23-kube-api-access-f85j7\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-log\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.799978 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.901695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35455de2-123c-442f-88de-e3fa878b3c09-scripts\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.901790 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-log-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.901826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-lib\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.901856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-run\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.901903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" 
(UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-etc-ovs\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c88c8b-430e-40d7-9598-464d1dbead23-scripts\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-ovn-controller-tls-certs\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902228 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-combined-ca-bundle\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfxm\" (UniqueName: \"kubernetes.io/projected/35455de2-123c-442f-88de-e3fa878b3c09-kube-api-access-zsfxm\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f85j7\" (UniqueName: \"kubernetes.io/projected/57c88c8b-430e-40d7-9598-464d1dbead23-kube-api-access-f85j7\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902289 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902304 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-log\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902446 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-etc-ovs\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " 
pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902571 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-run\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902692 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-lib\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902771 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35455de2-123c-442f-88de-e3fa878b3c09-var-log\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902932 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-log-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.902934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57c88c8b-430e-40d7-9598-464d1dbead23-var-run-ovn\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.904723 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35455de2-123c-442f-88de-e3fa878b3c09-scripts\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.906586 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c88c8b-430e-40d7-9598-464d1dbead23-scripts\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.908746 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-ovn-controller-tls-certs\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.909470 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c88c8b-430e-40d7-9598-464d1dbead23-combined-ca-bundle\") pod 
\"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.925011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfxm\" (UniqueName: \"kubernetes.io/projected/35455de2-123c-442f-88de-e3fa878b3c09-kube-api-access-zsfxm\") pod \"ovn-controller-ovs-b979n\" (UID: \"35455de2-123c-442f-88de-e3fa878b3c09\") " pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:11 crc kubenswrapper[4720]: I0202 09:14:11.925701 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f85j7\" (UniqueName: \"kubernetes.io/projected/57c88c8b-430e-40d7-9598-464d1dbead23-kube-api-access-f85j7\") pod \"ovn-controller-774qf\" (UID: \"57c88c8b-430e-40d7-9598-464d1dbead23\") " pod="openstack/ovn-controller-774qf" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.035222 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.080839 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.610504 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.613204 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.615976 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.616066 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.616600 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.616807 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.617140 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-55wlg" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.624421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.733843 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ll9\" (UniqueName: \"kubernetes.io/projected/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-kube-api-access-24ll9\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.733911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.733944 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.733963 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.734113 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.734142 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.734181 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.734244 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-config\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835051 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835139 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-config\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835171 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24ll9\" (UniqueName: \"kubernetes.io/projected/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-kube-api-access-24ll9\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835190 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc 
kubenswrapper[4720]: I0202 09:14:12.835213 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835228 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835257 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835280 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.835530 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.836259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.836327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.837742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-config\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.839370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.839631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
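[Note: the records above show the kubelet's two-phase mount for the "kubernetes.io/local-volume" plugin: MountVolume.MountDevice stages the volume at its device mount path (/mnt/openstack/pv03), after which MountVolume.SetUp bind-mounts it into the pod's volume directory. A minimal sketch of the kind of PersistentVolume object that would sit behind "local-storage03-crc", assuming k8s.io/api and k8s.io/apimachinery are on the module path; the path and node name are taken from the log, while the capacity and storage class name are assumptions.]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsMode := corev1.PersistentVolumeFilesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage03-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // capacity is an assumption
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &fsMode,
			StorageClassName: "local-storage", // class name is an assumption
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The device mount path seen in the MountDevice record above.
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv03"},
			},
			// Local PVs must pin to the node that owns the path; "crc" is the
			// node name in every record of this log.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, "->", pv.Spec.Local.Path)
}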
\"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.842708 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.855371 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24ll9\" (UniqueName: \"kubernetes.io/projected/871b4d00-52ff-41e8-9e5a-6f1e567dcef5-kube-api-access-24ll9\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.858160 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"871b4d00-52ff-41e8-9e5a-6f1e567dcef5\") " pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:12 crc kubenswrapper[4720]: I0202 09:14:12.941547 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.649803 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.650389 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxnx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-b4dpx_openstack(b59796c6-17b1-494a-aef6-e230e60af54a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.651930 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" podUID="b59796c6-17b1-494a-aef6-e230e60af54a" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.719405 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.719545 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g62p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hk9c6_openstack(8f04ee95-76b7-461f-8b63-3724206b1296): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:14:14 crc kubenswrapper[4720]: E0202 09:14:14.721531 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" podUID="8f04ee95-76b7-461f-8b63-3724206b1296" Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.412652 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:14:15 crc kubenswrapper[4720]: W0202 09:14:15.412663 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cda7a8a_d405_4c4f_b8c2_bf75323634b9.slice/crio-1398a8e90971e6dccf9e22f0ac93965cef33c70b553daed87bdf37223af6d8ec WatchSource:0}: Error finding container 1398a8e90971e6dccf9e22f0ac93965cef33c70b553daed87bdf37223af6d8ec: Status 404 returned error can't find the container with id 1398a8e90971e6dccf9e22f0ac93965cef33c70b553daed87bdf37223af6d8ec Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.422741 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.445661 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.452400 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.467067 4720 util.go:48] "No ready sandbox for pod can be found. 
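[Note: the two "Unhandled Error" records above are raw Go struct dumps of the failing init container. Rebuilt as k8s.io/api types for readability, a sketch in which every field value is copied from the first dump (pod dnsmasq-dns-675f4bcbfc-b4dpx) and nil/zero fields are omitted. The pull error "context canceled" usually means the pull was abandoned rather than rejected, which would be consistent with these pods being deleted shortly afterwards (SyncLoop DELETE below); that reading is an inference, not something the log states.]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsUser := int64(1000650000)
	runAsNonRoot := true
	noEscalation := false
	initContainer := corev1.Container{
		Name:    "init",
		Image:   "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified",
		Command: []string{"/bin/bash"},
		// The --test flag makes dnsmasq validate its config and exit, which
		// is why this runs as an init container.
		Args: []string{"-c", "dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d " +
			"--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug " +
			"--bind-interfaces --listen-address=$(POD_IP) --port 5353 " +
			"--log-facility=- --no-hosts --domain-needed --no-resolv " +
			"--bogus-priv --log-queries --test"},
		Env: []corev1.EnvVar{{
			Name: "POD_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.podIP"},
			},
		}},
		VolumeMounts: []corev1.VolumeMount{{
			// Single file projected out of the "config" volume via SubPath.
			Name:      "config",
			ReadOnly:  true,
			MountPath: "/etc/dnsmasq.d/config.cfg",
			SubPath:   "dns",
		}},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &noEscalation,
			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
		},
	}
	fmt.Println(initContainer.Name, initContainer.Image)
}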
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.480889 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx"
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.587483 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config\") pod \"8f04ee95-76b7-461f-8b63-3724206b1296\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") "
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.587537 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxnx8\" (UniqueName: \"kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8\") pod \"b59796c6-17b1-494a-aef6-e230e60af54a\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") "
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.587610 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc\") pod \"8f04ee95-76b7-461f-8b63-3724206b1296\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") "
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.587638 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config\") pod \"b59796c6-17b1-494a-aef6-e230e60af54a\" (UID: \"b59796c6-17b1-494a-aef6-e230e60af54a\") "
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.587723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g62p\" (UniqueName: \"kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p\") pod \"8f04ee95-76b7-461f-8b63-3724206b1296\" (UID: \"8f04ee95-76b7-461f-8b63-3724206b1296\") "
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.588326 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config" (OuterVolumeSpecName: "config") pod "b59796c6-17b1-494a-aef6-e230e60af54a" (UID: "b59796c6-17b1-494a-aef6-e230e60af54a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.589164 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config" (OuterVolumeSpecName: "config") pod "8f04ee95-76b7-461f-8b63-3724206b1296" (UID: "8f04ee95-76b7-461f-8b63-3724206b1296"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.593634 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p" (OuterVolumeSpecName: "kube-api-access-6g62p") pod "8f04ee95-76b7-461f-8b63-3724206b1296" (UID: "8f04ee95-76b7-461f-8b63-3724206b1296"). InnerVolumeSpecName "kube-api-access-6g62p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.597232 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.600430 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f04ee95-76b7-461f-8b63-3724206b1296" (UID: "8f04ee95-76b7-461f-8b63-3724206b1296"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.603136 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.603311 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8" (OuterVolumeSpecName: "kube-api-access-fxnx8") pod "b59796c6-17b1-494a-aef6-e230e60af54a" (UID: "b59796c6-17b1-494a-aef6-e230e60af54a"). InnerVolumeSpecName "kube-api-access-fxnx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.616355 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-774qf"]
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.689521 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g62p\" (UniqueName: \"kubernetes.io/projected/8f04ee95-76b7-461f-8b63-3724206b1296-kube-api-access-6g62p\") on node \"crc\" DevicePath \"\""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.690652 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.690754 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxnx8\" (UniqueName: \"kubernetes.io/projected/b59796c6-17b1-494a-aef6-e230e60af54a-kube-api-access-fxnx8\") on node \"crc\" DevicePath \"\""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.690834 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f04ee95-76b7-461f-8b63-3724206b1296-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.690948 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59796c6-17b1-494a-aef6-e230e60af54a-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.702353 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 09:14:15 crc kubenswrapper[4720]: W0202 09:14:15.707770 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod871b4d00_52ff_41e8_9e5a_6f1e567dcef5.slice/crio-c55bea9d55ce02203eb419595412c066cecdfaeb960c0a6c67b145d123b0f9dd WatchSource:0}: Error finding container c55bea9d55ce02203eb419595412c066cecdfaeb960c0a6c67b145d123b0f9dd: Status 404 returned error can't find the container with id c55bea9d55ce02203eb419595412c066cecdfaeb960c0a6c67b145d123b0f9dd
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.815409 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerStarted","Data":"1398a8e90971e6dccf9e22f0ac93965cef33c70b553daed87bdf37223af6d8ec"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.818335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerStarted","Data":"21475c33bd904e25dfe8eb9ffde56cefb81552f81c640fe7f50566e4a016d55b"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.820080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29c13267-2f9e-4e1c-b52f-66be31da5155","Type":"ContainerStarted","Data":"a419b96f1b7980d24870d8a9188367dd74f01a7ebb17ea23519ae43a68073596"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.825066 4720 generic.go:334] "Generic (PLEG): container finished" podID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerID="b164e246c3b2b6128ef42247419bed1600f944ffe71d6e4802f564f73a24c193" exitCode=0
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.825163 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" event={"ID":"e13c3615-2a60-40a6-b6a4-a5c410520373","Type":"ContainerDied","Data":"b164e246c3b2b6128ef42247419bed1600f944ffe71d6e4802f564f73a24c193"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.826509 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-774qf" event={"ID":"57c88c8b-430e-40d7-9598-464d1dbead23","Type":"ContainerStarted","Data":"a1f75e8d2f663b2e25120809b08f6f5099ec8af53cb0742c4519bf67992dd042"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.829346 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerID="7ee01a0d2722e24e082255623437d878e678129e9cef2b69c84d51b23b485eac" exitCode=0
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.829454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" event={"ID":"ba3c766a-6925-496d-bf40-1d93aa6a1d8c","Type":"ContainerDied","Data":"7ee01a0d2722e24e082255623437d878e678129e9cef2b69c84d51b23b485eac"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.830646 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"871b4d00-52ff-41e8-9e5a-6f1e567dcef5","Type":"ContainerStarted","Data":"c55bea9d55ce02203eb419595412c066cecdfaeb960c0a6c67b145d123b0f9dd"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.833318 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f3a7ecf-2ee4-4f15-9785-bc895935d771","Type":"ContainerStarted","Data":"c2df19903352e6b54029c65a029035a4db26641274ba6fc4223d1840bb0bdff5"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.834484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26b9fd3f-f554-4920-ba34-8e8dc34b78ed","Type":"ContainerStarted","Data":"9f4a4a91f2c2598e4a1d3cf698dcdecfa87a0c864e2bb2605da79cd467fe0419"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.835358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx" event={"ID":"b59796c6-17b1-494a-aef6-e230e60af54a","Type":"ContainerDied","Data":"bd2bfca0fe482d0fb3ed223a52b0216a77181d5167ce5035947b044200afa1c2"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.835368 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b4dpx"
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.836129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06","Type":"ContainerStarted","Data":"a79bd60b5a0b20278f29b21abfaf8f980adaf2012ae4c68d901b17b80cfae9d3"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.837146 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6" event={"ID":"8f04ee95-76b7-461f-8b63-3724206b1296","Type":"ContainerDied","Data":"3dc8b107c09eac172a7094c998112ee87c6b4bfeaf139f15f7f1b6d2376ca389"}
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.837216 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hk9c6"
Feb 02 09:14:15 crc kubenswrapper[4720]: I0202 09:14:15.878334 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b979n"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.044252 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.064535 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b4dpx"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.073788 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.088927 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hk9c6"]
Feb 02 09:14:16 crc kubenswrapper[4720]: E0202 09:14:16.109053 4720 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Feb 02 09:14:16 crc kubenswrapper[4720]: 	rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e13c3615-2a60-40a6-b6a4-a5c410520373/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 02 09:14:16 crc kubenswrapper[4720]: > podSandboxID="fedc74913af89d568f3f22c90b7dc28c439f81723376d0b4b5e412879eec079a"
Feb 02 09:14:16 crc kubenswrapper[4720]: E0202 09:14:16.109177 4720 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 02 09:14:16 crc kubenswrapper[4720]: 	container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hnp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-wh75d_openstack(e13c3615-2a60-40a6-b6a4-a5c410520373): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e13c3615-2a60-40a6-b6a4-a5c410520373/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 02 09:14:16 crc kubenswrapper[4720]: > logger="UnhandledError"
Feb 02 09:14:16 crc kubenswrapper[4720]: E0202 09:14:16.110317 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e13c3615-2a60-40a6-b6a4-a5c410520373/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373"
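[Note: for every subPath volume mount, the kubelet first prepares a bind-mount source under the pod directory and the runtime then mounts that source at the container path; the CreateContainerError above fired because that source was gone by the time cri-o tried to use it. A small sketch of how that source path is assembled, using only the layout visible in the error itself; subPathSource is a hypothetical helper name.]

package main

import (
	"fmt"
	"path/filepath"
)

// subPathSource reproduces the bind-mount source layout seen in the error:
// <kubeletDir>/pods/<podUID>/volume-subpaths/<volume>/<container>/<mountIndex>.
// mountIndex is the position of the mount in the container's VolumeMounts
// list; dns-svc is the second mount in the dump above, hence the trailing "1".
func subPathSource(kubeletDir, podUID, volumeName, containerName string, mountIndex int) string {
	return filepath.Join(kubeletDir, "pods", podUID, "volume-subpaths",
		volumeName, containerName, fmt.Sprint(mountIndex))
}

func main() {
	// Values taken from the CreateContainerError record above.
	fmt.Println(subPathSource("/var/lib/kubelet",
		"e13c3615-2a60-40a6-b6a4-a5c410520373", "dns-svc", "dnsmasq-dns", 1))
	// -> /var/lib/kubelet/pods/e13c3615-2a60-40a6-b6a4-a5c410520373/volume-subpaths/dns-svc/dnsmasq-dns/1
}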
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.138719 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.142676 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.146120 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.146516 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nzs27"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.147326 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.147572 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.151869 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307339 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwqjc\" (UniqueName: \"kubernetes.io/projected/d1e015f3-dd0c-4380-ad73-362c5f1b704f-kube-api-access-wwqjc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307561 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307622 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307643 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307680 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-config\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.307721 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409229 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwqjc\" (UniqueName: \"kubernetes.io/projected/d1e015f3-dd0c-4380-ad73-362c5f1b704f-kube-api-access-wwqjc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409592 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409654 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409677 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-config\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409738 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409771 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.409789 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.410526 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.412699 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.413469 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-config\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.415603 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1e015f3-dd0c-4380-ad73-362c5f1b704f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.416554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.416634 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.416711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e015f3-dd0c-4380-ad73-362c5f1b704f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.430676 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwqjc\" (UniqueName: \"kubernetes.io/projected/d1e015f3-dd0c-4380-ad73-362c5f1b704f-kube-api-access-wwqjc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.460759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d1e015f3-dd0c-4380-ad73-362c5f1b704f\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.497375 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.846133 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b979n" event={"ID":"35455de2-123c-442f-88de-e3fa878b3c09","Type":"ContainerStarted","Data":"f9b7867713b935a86625073fa1a13ddf16fa78f6aecf36965cf54b58d4d38041"}
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.851137 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" event={"ID":"ba3c766a-6925-496d-bf40-1d93aa6a1d8c","Type":"ContainerStarted","Data":"6800d3e099ed76e5266c1abdd1cc39f0fa30f19c525465349a3fe05432ee6e96"}
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.851740 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.903130 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f04ee95-76b7-461f-8b63-3724206b1296" path="/var/lib/kubelet/pods/8f04ee95-76b7-461f-8b63-3724206b1296/volumes"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.903522 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59796c6-17b1-494a-aef6-e230e60af54a" path="/var/lib/kubelet/pods/b59796c6-17b1-494a-aef6-e230e60af54a/volumes"
Feb 02 09:14:16 crc kubenswrapper[4720]: I0202 09:14:16.922231 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" podStartSLOduration=2.907450312 podStartE2EDuration="14.922211274s" podCreationTimestamp="2026-02-02 09:14:02 +0000 UTC" firstStartedPulling="2026-02-02 09:14:02.8296 +0000 UTC m=+1076.685225556" lastFinishedPulling="2026-02-02 09:14:14.844360962 +0000 UTC m=+1088.699986518" observedRunningTime="2026-02-02 09:14:16.893411813 +0000 UTC m=+1090.749037369" watchObservedRunningTime="2026-02-02 09:14:16.922211274 +0000 UTC m=+1090.777836830"
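[Note: the startup-duration record above decomposes exactly. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: 09:14:16.922211274 - 09:14:02 = 14.922211274s. podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 09:14:14.844360962 - 09:14:02.8296 = 12.014760962s), giving 14.922211274 - 12.014760962 = 2.907450312s, which matches the logged value; reading the SLO figure as "startup time excluding image pulls" follows from the arithmetic, though the definition itself lives in the kubelet's pod_startup_latency_tracker.]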
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.087965 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.335405 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mgchh"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.341825 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.343245 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.348248 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mgchh"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.425965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8n7\" (UniqueName: \"kubernetes.io/projected/3b570bee-e4d7-4d5a-98a2-939066b0dff4-kube-api-access-tb8n7\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.426108 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovn-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.426199 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-combined-ca-bundle\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.426232 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovs-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.426251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b570bee-e4d7-4d5a-98a2-939066b0dff4-config\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.426279 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.460945 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.476670 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.478154 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.480541 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.483338 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.530796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8n7\" (UniqueName: \"kubernetes.io/projected/3b570bee-e4d7-4d5a-98a2-939066b0dff4-kube-api-access-tb8n7\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.535015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovn-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.535259 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-combined-ca-bundle\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.535315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovs-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.535347 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b570bee-e4d7-4d5a-98a2-939066b0dff4-config\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.535392 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.536434 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovs-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.536691 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3b570bee-e4d7-4d5a-98a2-939066b0dff4-ovn-rundir\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.537379 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b570bee-e4d7-4d5a-98a2-939066b0dff4-config\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.547778 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.549121 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b570bee-e4d7-4d5a-98a2-939066b0dff4-combined-ca-bundle\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.551156 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8n7\" (UniqueName: \"kubernetes.io/projected/3b570bee-e4d7-4d5a-98a2-939066b0dff4-kube-api-access-tb8n7\") pod \"ovn-controller-metrics-mgchh\" (UID: \"3b570bee-e4d7-4d5a-98a2-939066b0dff4\") " pod="openstack/ovn-controller-metrics-mgchh"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.595563 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.619909 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.630000 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.633392 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.637787 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.637845 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.637914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2m6\" (UniqueName: \"kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.637942 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.647330 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"]
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.674853 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mgchh"
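[Note: the "kubernetes.io/configmap" volumes being attached above are plain ConfigMap-backed pod volumes, while the kube-api-access-* volume is the projected service-account token the kubelet injects on its own. A sketch of the volumes list for dnsmasq-dns-7fd796d7df-8xztx as it might appear in the pod spec; the volume names come from the log, but the backing ConfigMap names are assumptions except for "ovsdbserver-nb", which matches the cache record above.]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Helper building a ConfigMap-backed volume.
	cmVolume := func(name, configMap string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: configMap},
				},
			},
		}
	}
	volumes := []corev1.Volume{
		cmVolume("config", "dnsmasq-dns"),            // backing ConfigMap name is an assumption
		cmVolume("dns-svc", "dns-svc"),               // backing ConfigMap name is an assumption
		cmVolume("ovsdbserver-nb", "ovsdbserver-nb"), // matches the "Caches populated" record above
	}
	for _, v := range volumes {
		fmt.Println(v.Name, "->", v.ConfigMap.Name)
	}
}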
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739712 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739803 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd2m6\" (UniqueName: \"kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.739897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.740145 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.740276 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkj2\" (UniqueName: \"kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.740344 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.740809 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.740870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.741501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.756773 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2m6\" (UniqueName: \"kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6\") pod \"dnsmasq-dns-7fd796d7df-8xztx\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.824018 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.842183 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.842297 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.842342 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.842376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.842424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nkj2\" (UniqueName: \"kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.843608 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.844386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.845261 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.846006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.861873 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nkj2\" (UniqueName: \"kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2\") pod \"dnsmasq-dns-86db49b7ff-7bw85\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.868241 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d1e015f3-dd0c-4380-ad73-362c5f1b704f","Type":"ContainerStarted","Data":"c474eb60dbbb8471432294a15d52007de0e788a9b51a08683d1d9e70e5985c64"}
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.872831 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="dnsmasq-dns" containerID="cri-o://c4d4d5da4c13449463149ba9d7eb8e56606a3ab571524a9e1dbb373cb26796ff" gracePeriod=10
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.872918 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" event={"ID":"e13c3615-2a60-40a6-b6a4-a5c410520373","Type":"ContainerStarted","Data":"c4d4d5da4c13449463149ba9d7eb8e56606a3ab571524a9e1dbb373cb26796ff"}
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.872948 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-wh75d"
Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.892571 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" podStartSLOduration=4.663203517 podStartE2EDuration="16.892554579s" podCreationTimestamp="2026-02-02 09:14:01
+0000 UTC" firstStartedPulling="2026-02-02 09:14:02.610009496 +0000 UTC m=+1076.465635052" lastFinishedPulling="2026-02-02 09:14:14.839360558 +0000 UTC m=+1088.694986114" observedRunningTime="2026-02-02 09:14:17.890494838 +0000 UTC m=+1091.746120394" watchObservedRunningTime="2026-02-02 09:14:17.892554579 +0000 UTC m=+1091.748180135" Feb 02 09:14:17 crc kubenswrapper[4720]: I0202 09:14:17.953268 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.887252 4720 generic.go:334] "Generic (PLEG): container finished" podID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerID="c4d4d5da4c13449463149ba9d7eb8e56606a3ab571524a9e1dbb373cb26796ff" exitCode=0 Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.887786 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" containerID="cri-o://6800d3e099ed76e5266c1abdd1cc39f0fa30f19c525465349a3fe05432ee6e96" gracePeriod=10 Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.899504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" event={"ID":"e13c3615-2a60-40a6-b6a4-a5c410520373","Type":"ContainerDied","Data":"c4d4d5da4c13449463149ba9d7eb8e56606a3ab571524a9e1dbb373cb26796ff"} Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.899541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" event={"ID":"e13c3615-2a60-40a6-b6a4-a5c410520373","Type":"ContainerDied","Data":"fedc74913af89d568f3f22c90b7dc28c439f81723376d0b4b5e412879eec079a"} Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.899553 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedc74913af89d568f3f22c90b7dc28c439f81723376d0b4b5e412879eec079a" Feb 02 09:14:18 crc kubenswrapper[4720]: I0202 09:14:18.985790 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.164624 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc\") pod \"e13c3615-2a60-40a6-b6a4-a5c410520373\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.164685 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hnp4\" (UniqueName: \"kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4\") pod \"e13c3615-2a60-40a6-b6a4-a5c410520373\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.164756 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config\") pod \"e13c3615-2a60-40a6-b6a4-a5c410520373\" (UID: \"e13c3615-2a60-40a6-b6a4-a5c410520373\") " Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.168453 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4" (OuterVolumeSpecName: "kube-api-access-5hnp4") pod "e13c3615-2a60-40a6-b6a4-a5c410520373" (UID: "e13c3615-2a60-40a6-b6a4-a5c410520373"). InnerVolumeSpecName "kube-api-access-5hnp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.206953 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config" (OuterVolumeSpecName: "config") pod "e13c3615-2a60-40a6-b6a4-a5c410520373" (UID: "e13c3615-2a60-40a6-b6a4-a5c410520373"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.209860 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e13c3615-2a60-40a6-b6a4-a5c410520373" (UID: "e13c3615-2a60-40a6-b6a4-a5c410520373"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.266416 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.266455 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hnp4\" (UniqueName: \"kubernetes.io/projected/e13c3615-2a60-40a6-b6a4-a5c410520373-kube-api-access-5hnp4\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.266469 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13c3615-2a60-40a6-b6a4-a5c410520373-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.912463 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerID="6800d3e099ed76e5266c1abdd1cc39f0fa30f19c525465349a3fe05432ee6e96" exitCode=0 Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.912506 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" event={"ID":"ba3c766a-6925-496d-bf40-1d93aa6a1d8c","Type":"ContainerDied","Data":"6800d3e099ed76e5266c1abdd1cc39f0fa30f19c525465349a3fe05432ee6e96"} Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.912836 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wh75d" Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.955304 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"] Feb 02 09:14:19 crc kubenswrapper[4720]: I0202 09:14:19.963924 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wh75d"] Feb 02 09:14:20 crc kubenswrapper[4720]: I0202 09:14:20.907716 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" path="/var/lib/kubelet/pods/e13c3615-2a60-40a6-b6a4-a5c410520373/volumes" Feb 02 09:14:22 crc kubenswrapper[4720]: I0202 09:14:22.371715 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.058341 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.059121 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwq2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(289905c2-8b8c-4d85-a9d4-19ac7c9b9b06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.060288 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="289905c2-8b8c-4d85-a9d4-19ac7c9b9b06" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.247720 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.248086 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5d8h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(29c13267-2f9e-4e1c-b52f-66be31da5155): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:14:31 crc kubenswrapper[4720]: E0202 09:14:31.249221 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="29c13267-2f9e-4e1c-b52f-66be31da5155" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.285056 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.401687 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config\") pod \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.401780 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hj88\" (UniqueName: \"kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88\") pod \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.402006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc\") pod \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\" (UID: \"ba3c766a-6925-496d-bf40-1d93aa6a1d8c\") " Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.406859 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88" (OuterVolumeSpecName: "kube-api-access-8hj88") pod "ba3c766a-6925-496d-bf40-1d93aa6a1d8c" (UID: "ba3c766a-6925-496d-bf40-1d93aa6a1d8c"). InnerVolumeSpecName "kube-api-access-8hj88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.435723 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config" (OuterVolumeSpecName: "config") pod "ba3c766a-6925-496d-bf40-1d93aa6a1d8c" (UID: "ba3c766a-6925-496d-bf40-1d93aa6a1d8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.445463 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba3c766a-6925-496d-bf40-1d93aa6a1d8c" (UID: "ba3c766a-6925-496d-bf40-1d93aa6a1d8c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.503219 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.503252 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hj88\" (UniqueName: \"kubernetes.io/projected/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-kube-api-access-8hj88\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.503264 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba3c766a-6925-496d-bf40-1d93aa6a1d8c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.960247 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"] Feb 02 09:14:31 crc kubenswrapper[4720]: I0202 09:14:31.966249 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"] Feb 02 09:14:31 crc kubenswrapper[4720]: W0202 09:14:31.997535 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2641f426_af38_4405_8986_3edf8b8401db.slice/crio-f28c8c49e1559669a90daa0e6d3a930d1433a4409416988a19070211201b53f0 WatchSource:0}: Error finding container f28c8c49e1559669a90daa0e6d3a930d1433a4409416988a19070211201b53f0: Status 404 returned error can't find the container with id f28c8c49e1559669a90daa0e6d3a930d1433a4409416988a19070211201b53f0 Feb 02 09:14:32 crc kubenswrapper[4720]: W0202 09:14:32.002550 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f6155b_8daf_44bb_a369_b07421ae38b2.slice/crio-e37ff2d0375c5e5990ff6d043e94a945b7a38842d8bf76b2514af1c808753c03 WatchSource:0}: Error finding container e37ff2d0375c5e5990ff6d043e94a945b7a38842d8bf76b2514af1c808753c03: Status 404 returned error can't find the container with id e37ff2d0375c5e5990ff6d043e94a945b7a38842d8bf76b2514af1c808753c03 Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.010418 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.011409 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" event={"ID":"ba3c766a-6925-496d-bf40-1d93aa6a1d8c","Type":"ContainerDied","Data":"27be7a99a81ccef5921cb56beec6500f954f44adee3bb4dadc2fd093e4b41d26"} Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.011454 4720 scope.go:117] "RemoveContainer" containerID="6800d3e099ed76e5266c1abdd1cc39f0fa30f19c525465349a3fe05432ee6e96" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.011578 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="289905c2-8b8c-4d85-a9d4-19ac7c9b9b06" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.013222 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="29c13267-2f9e-4e1c-b52f-66be31da5155" Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.062726 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mgchh"] Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.077866 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"] Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.089706 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5p4m"] Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.370636 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-b5p4m" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.600509 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.600585 4720 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.600765 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvp2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(26b9fd3f-f554-4920-ba34-8e8dc34b78ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 09:14:32 crc kubenswrapper[4720]: E0202 09:14:32.602006 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.630933 4720 scope.go:117] "RemoveContainer" containerID="7ee01a0d2722e24e082255623437d878e678129e9cef2b69c84d51b23b485eac" Feb 02 09:14:32 crc kubenswrapper[4720]: I0202 09:14:32.927252 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" path="/var/lib/kubelet/pods/ba3c766a-6925-496d-bf40-1d93aa6a1d8c/volumes" Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.017470 4720 generic.go:334] "Generic (PLEG): container finished" podID="2641f426-af38-4405-8986-3edf8b8401db" containerID="d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3" exitCode=0 Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.017521 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" event={"ID":"2641f426-af38-4405-8986-3edf8b8401db","Type":"ContainerDied","Data":"d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.017543 4720 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" event={"ID":"2641f426-af38-4405-8986-3edf8b8401db","Type":"ContainerStarted","Data":"f28c8c49e1559669a90daa0e6d3a930d1433a4409416988a19070211201b53f0"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.022160 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f3a7ecf-2ee4-4f15-9785-bc895935d771","Type":"ContainerStarted","Data":"eb34bd0bff3d037e79c478a740c217050e0a729706f270b0d9c65710cb210141"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.022389 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.029036 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d1e015f3-dd0c-4380-ad73-362c5f1b704f","Type":"ContainerStarted","Data":"a5bfe3766d5f06227ee56bbba21725a84c7d8f0e6ec9fd7edbd9a3134280f3eb"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.030321 4720 generic.go:334] "Generic (PLEG): container finished" podID="35455de2-123c-442f-88de-e3fa878b3c09" containerID="45da46541162537aa7a5f0feb9ec1d7569c456e5dbede5a0993db85a6fd919b9" exitCode=0 Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.030369 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b979n" event={"ID":"35455de2-123c-442f-88de-e3fa878b3c09","Type":"ContainerDied","Data":"45da46541162537aa7a5f0feb9ec1d7569c456e5dbede5a0993db85a6fd919b9"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.036335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-774qf" event={"ID":"57c88c8b-430e-40d7-9598-464d1dbead23","Type":"ContainerStarted","Data":"679b71572423c16a08d1a64891c5346adc7570061f776ba0c827cf1cadab8aee"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.036559 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-774qf" Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.041051 4720 generic.go:334] "Generic (PLEG): container finished" podID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerID="391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf" exitCode=0 Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.041225 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" event={"ID":"88f6155b-8daf-44bb-a369-b07421ae38b2","Type":"ContainerDied","Data":"391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.041250 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" event={"ID":"88f6155b-8daf-44bb-a369-b07421ae38b2","Type":"ContainerStarted","Data":"e37ff2d0375c5e5990ff6d043e94a945b7a38842d8bf76b2514af1c808753c03"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.046174 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mgchh" event={"ID":"3b570bee-e4d7-4d5a-98a2-939066b0dff4","Type":"ContainerStarted","Data":"692d4b22cf5e9708c566d4308fa0ec2d15325b8925da1b91bb3303ca65c63db0"} Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.052585 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.221749853 podStartE2EDuration="27.052569883s" podCreationTimestamp="2026-02-02 09:14:06 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.454116585 +0000 UTC 
m=+1089.309742141" lastFinishedPulling="2026-02-02 09:14:23.284936615 +0000 UTC m=+1097.140562171" observedRunningTime="2026-02-02 09:14:33.051025609 +0000 UTC m=+1106.906651165" watchObservedRunningTime="2026-02-02 09:14:33.052569883 +0000 UTC m=+1106.908195439" Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.052626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"871b4d00-52ff-41e8-9e5a-6f1e567dcef5","Type":"ContainerStarted","Data":"fa232e81ed69e30c4fc18430b1e010f1aa974676a42300c96bfdc21b3e569f8d"} Feb 02 09:14:33 crc kubenswrapper[4720]: E0202 09:14:33.056974 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" Feb 02 09:14:33 crc kubenswrapper[4720]: I0202 09:14:33.077176 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-774qf" podStartSLOduration=6.075180613 podStartE2EDuration="22.077152434s" podCreationTimestamp="2026-02-02 09:14:11 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.623488322 +0000 UTC m=+1089.479113868" lastFinishedPulling="2026-02-02 09:14:31.625460133 +0000 UTC m=+1105.481085689" observedRunningTime="2026-02-02 09:14:33.069516552 +0000 UTC m=+1106.925142108" watchObservedRunningTime="2026-02-02 09:14:33.077152434 +0000 UTC m=+1106.932777990" Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.058189 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"871b4d00-52ff-41e8-9e5a-6f1e567dcef5","Type":"ContainerStarted","Data":"d85748dd7f47d6a1462f1bae14c6f456bbe553575a6846936af59493ef7d5891"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.060833 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b979n" event={"ID":"35455de2-123c-442f-88de-e3fa878b3c09","Type":"ContainerStarted","Data":"25a84655e762c4b53bee3b33be3836cb3ea3e0b5ef0233ce0f6a0d7516fbecb1"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.065584 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" event={"ID":"88f6155b-8daf-44bb-a369-b07421ae38b2","Type":"ContainerStarted","Data":"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.065736 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.068025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" event={"ID":"2641f426-af38-4405-8986-3edf8b8401db","Type":"ContainerStarted","Data":"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.068269 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.071183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerStarted","Data":"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.074226 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerStarted","Data":"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1"} Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.093961 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.959062438 podStartE2EDuration="23.093937254s" podCreationTimestamp="2026-02-02 09:14:11 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.710313652 +0000 UTC m=+1089.565939208" lastFinishedPulling="2026-02-02 09:14:33.845188448 +0000 UTC m=+1107.700814024" observedRunningTime="2026-02-02 09:14:34.087335416 +0000 UTC m=+1107.942960992" watchObservedRunningTime="2026-02-02 09:14:34.093937254 +0000 UTC m=+1107.949562860" Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.158628 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" podStartSLOduration=17.15860963 podStartE2EDuration="17.15860963s" podCreationTimestamp="2026-02-02 09:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:14:34.154083189 +0000 UTC m=+1108.009708745" watchObservedRunningTime="2026-02-02 09:14:34.15860963 +0000 UTC m=+1108.014235186" Feb 02 09:14:34 crc kubenswrapper[4720]: I0202 09:14:34.179011 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" podStartSLOduration=17.178996627 podStartE2EDuration="17.178996627s" podCreationTimestamp="2026-02-02 09:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:14:34.175365135 +0000 UTC m=+1108.030990711" watchObservedRunningTime="2026-02-02 09:14:34.178996627 +0000 UTC m=+1108.034622183" Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.089488 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b979n" event={"ID":"35455de2-123c-442f-88de-e3fa878b3c09","Type":"ContainerStarted","Data":"ffe7ae245c69bd42c50fbf06512169346b026a7909eb8c56dd0e248f3df21f53"} Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.090485 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.093037 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mgchh" event={"ID":"3b570bee-e4d7-4d5a-98a2-939066b0dff4","Type":"ContainerStarted","Data":"b9526115b8840403239d96b0927e3dfb503ce0b55ca8ee2458679b89782aecca"} Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.098300 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d1e015f3-dd0c-4380-ad73-362c5f1b704f","Type":"ContainerStarted","Data":"19dd64960e6cdead090c98692cdda81cd1dc10b9040a1e6d34f4a4d9c97e7722"} Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.123611 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b979n" podStartSLOduration=8.472661633 podStartE2EDuration="24.123584641s" podCreationTimestamp="2026-02-02 09:14:11 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.872608134 +0000 UTC m=+1089.728233690" lastFinishedPulling="2026-02-02 09:14:31.523531142 +0000 UTC 
m=+1105.379156698" observedRunningTime="2026-02-02 09:14:35.121637338 +0000 UTC m=+1108.977262904" watchObservedRunningTime="2026-02-02 09:14:35.123584641 +0000 UTC m=+1108.979210227" Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.153767 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.4205240359999998 podStartE2EDuration="20.153740896s" podCreationTimestamp="2026-02-02 09:14:15 +0000 UTC" firstStartedPulling="2026-02-02 09:14:17.09645442 +0000 UTC m=+1090.952079976" lastFinishedPulling="2026-02-02 09:14:33.82967126 +0000 UTC m=+1107.685296836" observedRunningTime="2026-02-02 09:14:35.149411009 +0000 UTC m=+1109.005036575" watchObservedRunningTime="2026-02-02 09:14:35.153740896 +0000 UTC m=+1109.009366492" Feb 02 09:14:35 crc kubenswrapper[4720]: I0202 09:14:35.175091 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mgchh" podStartSLOduration=16.451853076 podStartE2EDuration="18.175064153s" podCreationTimestamp="2026-02-02 09:14:17 +0000 UTC" firstStartedPulling="2026-02-02 09:14:32.127330651 +0000 UTC m=+1105.982956207" lastFinishedPulling="2026-02-02 09:14:33.850541718 +0000 UTC m=+1107.706167284" observedRunningTime="2026-02-02 09:14:35.173194871 +0000 UTC m=+1109.028820467" watchObservedRunningTime="2026-02-02 09:14:35.175064153 +0000 UTC m=+1109.030689749" Feb 02 09:14:36 crc kubenswrapper[4720]: I0202 09:14:36.107541 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b979n" Feb 02 09:14:36 crc kubenswrapper[4720]: I0202 09:14:36.497538 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 09:14:36 crc kubenswrapper[4720]: I0202 09:14:36.942250 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:37 crc kubenswrapper[4720]: I0202 09:14:37.011762 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:37 crc kubenswrapper[4720]: I0202 09:14:37.115335 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:37 crc kubenswrapper[4720]: I0202 09:14:37.158453 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 09:14:37 crc kubenswrapper[4720]: I0202 09:14:37.497511 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 09:14:37 crc kubenswrapper[4720]: I0202 09:14:37.562120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.220376 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.546338 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 09:14:38 crc kubenswrapper[4720]: E0202 09:14:38.547146 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="init" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547177 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="init" Feb 02 09:14:38 crc kubenswrapper[4720]: E0202 09:14:38.547231 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547243 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: E0202 09:14:38.547260 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547271 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: E0202 09:14:38.547335 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="init" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547348 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="init" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547597 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3c766a-6925-496d-bf40-1d93aa6a1d8c" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.547624 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13c3615-2a60-40a6-b6a4-a5c410520373" containerName="dnsmasq-dns" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.548830 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.552154 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vf9mp" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.552211 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.552265 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.552750 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.578550 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659415 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659473 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnm8\" (UniqueName: \"kubernetes.io/projected/d349fb1c-3289-47a0-a5ec-525740680f69-kube-api-access-5gnm8\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-scripts\") pod \"ovn-northd-0\" (UID: 
\"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659538 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659599 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-config\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.659635 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.760921 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761003 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-config\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761038 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761077 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnm8\" (UniqueName: 
\"kubernetes.io/projected/d349fb1c-3289-47a0-a5ec-525740680f69-kube-api-access-5gnm8\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761144 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-scripts\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761615 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.761999 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-scripts\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.762150 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d349fb1c-3289-47a0-a5ec-525740680f69-config\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.768441 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.769527 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.775118 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d349fb1c-3289-47a0-a5ec-525740680f69-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.780617 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnm8\" (UniqueName: \"kubernetes.io/projected/d349fb1c-3289-47a0-a5ec-525740680f69-kube-api-access-5gnm8\") pod \"ovn-northd-0\" (UID: \"d349fb1c-3289-47a0-a5ec-525740680f69\") " pod="openstack/ovn-northd-0" Feb 02 09:14:38 crc kubenswrapper[4720]: I0202 09:14:38.876427 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 09:14:39 crc kubenswrapper[4720]: I0202 09:14:39.353090 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 09:14:39 crc kubenswrapper[4720]: W0202 09:14:39.360220 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd349fb1c_3289_47a0_a5ec_525740680f69.slice/crio-c9d2aeb2f3af5c583b4b3c00332adf3e99f425ed28d0b0eca31a3e695f065bdc WatchSource:0}: Error finding container c9d2aeb2f3af5c583b4b3c00332adf3e99f425ed28d0b0eca31a3e695f065bdc: Status 404 returned error can't find the container with id c9d2aeb2f3af5c583b4b3c00332adf3e99f425ed28d0b0eca31a3e695f065bdc Feb 02 09:14:40 crc kubenswrapper[4720]: I0202 09:14:40.178131 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d349fb1c-3289-47a0-a5ec-525740680f69","Type":"ContainerStarted","Data":"c9d2aeb2f3af5c583b4b3c00332adf3e99f425ed28d0b0eca31a3e695f065bdc"} Feb 02 09:14:41 crc kubenswrapper[4720]: I0202 09:14:41.191368 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d349fb1c-3289-47a0-a5ec-525740680f69","Type":"ContainerStarted","Data":"9a4fd268b8e18a1141602f0b8319e1242fbb4258bbe710d3641f965f6d25dad0"} Feb 02 09:14:41 crc kubenswrapper[4720]: I0202 09:14:41.191725 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d349fb1c-3289-47a0-a5ec-525740680f69","Type":"ContainerStarted","Data":"16ba8a4bc7e9a957c9854da2194e477ed0c561de21dc7ff93d051c0e13a8d4a9"} Feb 02 09:14:41 crc kubenswrapper[4720]: I0202 09:14:41.191979 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 09:14:41 crc kubenswrapper[4720]: I0202 09:14:41.223776 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.244077469 podStartE2EDuration="3.223743718s" podCreationTimestamp="2026-02-02 09:14:38 +0000 UTC" firstStartedPulling="2026-02-02 09:14:39.362453263 +0000 UTC m=+1113.218078819" lastFinishedPulling="2026-02-02 09:14:40.342119512 +0000 UTC m=+1114.197745068" observedRunningTime="2026-02-02 09:14:41.216341993 +0000 UTC m=+1115.071967589" watchObservedRunningTime="2026-02-02 09:14:41.223743718 +0000 UTC m=+1115.079369314" Feb 02 09:14:41 crc kubenswrapper[4720]: I0202 09:14:41.696004 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 09:14:42 crc kubenswrapper[4720]: I0202 09:14:42.826223 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" Feb 02 09:14:42 crc kubenswrapper[4720]: I0202 09:14:42.955129 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.029020 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"] Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.209406 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="dnsmasq-dns" containerID="cri-o://bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1" gracePeriod=10 Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.796290 4720 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.863440 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc\") pod \"88f6155b-8daf-44bb-a369-b07421ae38b2\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.863529 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd2m6\" (UniqueName: \"kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6\") pod \"88f6155b-8daf-44bb-a369-b07421ae38b2\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.863566 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb\") pod \"88f6155b-8daf-44bb-a369-b07421ae38b2\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.863604 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config\") pod \"88f6155b-8daf-44bb-a369-b07421ae38b2\" (UID: \"88f6155b-8daf-44bb-a369-b07421ae38b2\") " Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.873483 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6" (OuterVolumeSpecName: "kube-api-access-sd2m6") pod "88f6155b-8daf-44bb-a369-b07421ae38b2" (UID: "88f6155b-8daf-44bb-a369-b07421ae38b2"). InnerVolumeSpecName "kube-api-access-sd2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.916272 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88f6155b-8daf-44bb-a369-b07421ae38b2" (UID: "88f6155b-8daf-44bb-a369-b07421ae38b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.919031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config" (OuterVolumeSpecName: "config") pod "88f6155b-8daf-44bb-a369-b07421ae38b2" (UID: "88f6155b-8daf-44bb-a369-b07421ae38b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.925391 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88f6155b-8daf-44bb-a369-b07421ae38b2" (UID: "88f6155b-8daf-44bb-a369-b07421ae38b2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.966467 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd2m6\" (UniqueName: \"kubernetes.io/projected/88f6155b-8daf-44bb-a369-b07421ae38b2-kube-api-access-sd2m6\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.966729 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.966741 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:43 crc kubenswrapper[4720]: I0202 09:14:43.966749 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88f6155b-8daf-44bb-a369-b07421ae38b2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.225426 4720 generic.go:334] "Generic (PLEG): container finished" podID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerID="bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1" exitCode=0 Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.225474 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.225499 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" event={"ID":"88f6155b-8daf-44bb-a369-b07421ae38b2","Type":"ContainerDied","Data":"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1"} Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.225549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8xztx" event={"ID":"88f6155b-8daf-44bb-a369-b07421ae38b2","Type":"ContainerDied","Data":"e37ff2d0375c5e5990ff6d043e94a945b7a38842d8bf76b2514af1c808753c03"} Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.225582 4720 scope.go:117] "RemoveContainer" containerID="bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1" Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.262547 4720 scope.go:117] "RemoveContainer" containerID="391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf" Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.281547 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"] Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.292036 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8xztx"] Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.293806 4720 scope.go:117] "RemoveContainer" containerID="bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1" Feb 02 09:14:44 crc kubenswrapper[4720]: E0202 09:14:44.294705 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1\": container with ID starting with bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1 not found: ID does not exist" containerID="bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1" Feb 02 09:14:44 crc kubenswrapper[4720]: 
I0202 09:14:44.294750 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1"} err="failed to get container status \"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1\": rpc error: code = NotFound desc = could not find container \"bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1\": container with ID starting with bb07f11f208fc5e9725c7ded4874177d3cc48febe0aa3f82aecb860d7cd480b1 not found: ID does not exist"
Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.294785 4720 scope.go:117] "RemoveContainer" containerID="391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf"
Feb 02 09:14:44 crc kubenswrapper[4720]: E0202 09:14:44.295515 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf\": container with ID starting with 391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf not found: ID does not exist" containerID="391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf"
Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.295656 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf"} err="failed to get container status \"391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf\": rpc error: code = NotFound desc = could not find container \"391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf\": container with ID starting with 391c2cd03c876630d22e8784c344d134550901cd91bed1c7b3cf9452bbd5feaf not found: ID does not exist"
Feb 02 09:14:44 crc kubenswrapper[4720]: E0202 09:14:44.464241 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f6155b_8daf_44bb_a369_b07421ae38b2.slice\": RecentStats: unable to find data in memory cache]"
Feb 02 09:14:44 crc kubenswrapper[4720]: I0202 09:14:44.904402 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" path="/var/lib/kubelet/pods/88f6155b-8daf-44bb-a369-b07421ae38b2/volumes"
Feb 02 09:14:47 crc kubenswrapper[4720]: I0202 09:14:47.258032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26b9fd3f-f554-4920-ba34-8e8dc34b78ed","Type":"ContainerStarted","Data":"e44c1ea25149a26f3ac38db981f5b3ff788d28903df82739cf3e2405c31b3b2a"}
Feb 02 09:14:47 crc kubenswrapper[4720]: I0202 09:14:47.258633 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 09:14:47 crc kubenswrapper[4720]: I0202 09:14:47.278397 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.630424745 podStartE2EDuration="39.278377086s" podCreationTimestamp="2026-02-02 09:14:08 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.620976619 +0000 UTC m=+1089.476602175" lastFinishedPulling="2026-02-02 09:14:46.26892892 +0000 UTC m=+1120.124554516" observedRunningTime="2026-02-02 09:14:47.278166502 +0000 UTC m=+1121.133792098" watchObservedRunningTime="2026-02-02 09:14:47.278377086 +0000 UTC m=+1121.134002642"
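The RemoveContainer / "DeleteContainer returned error" pairs above are benign: by the time the kubelet re-issues the removal, CRI-O has already deleted the container together with its pod sandbox, so the status lookup fails with gRPC NotFound. A cleanup path can treat that code as success. A minimal sketch, where removeContainer is a hypothetical stand-in for the CRI RuntimeService.RemoveContainer call, simulated to return the same gRPC code seen in the log:

```go
// Why the NotFound errors above are harmless: the cleanup goal
// ("container is gone") is already met, so NotFound counts as success.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates a CRI removal racing with sandbox teardown.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent swallows NotFound and propagates everything else.
func removeIfPresent(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	fmt.Println(removeIfPresent("bb07f11f208f")) // <nil>
}
```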
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.277819 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29c13267-2f9e-4e1c-b52f-66be31da5155","Type":"ContainerStarted","Data":"f0d3cbd39eb49b97e41f81407930d2d80f8617138d85eea78dfee21df8b38ff6"}
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.282581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06","Type":"ContainerStarted","Data":"460ef06d69b1ede10f9b8a62f967ebeab372e07aa58945d9d4cbb7e4fc05c3ae"}
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.475785 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"]
Feb 02 09:14:48 crc kubenswrapper[4720]: E0202 09:14:48.476216 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="init"
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.476250 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="init"
Feb 02 09:14:48 crc kubenswrapper[4720]: E0202 09:14:48.476275 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="dnsmasq-dns"
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.476282 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="dnsmasq-dns"
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.476453 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f6155b-8daf-44bb-a369-b07421ae38b2" containerName="dnsmasq-dns"
Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.477282 4720 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.500165 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"] Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.550267 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.550317 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gnp\" (UniqueName: \"kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.550343 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.550359 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.550485 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.665079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.665172 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gnp\" (UniqueName: \"kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.665208 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.665230 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.665275 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.666311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.666328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.666315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.666371 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.683629 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gnp\" (UniqueName: \"kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp\") pod \"dnsmasq-dns-698758b865-fcsps\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") " pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:48 crc kubenswrapper[4720]: I0202 09:14:48.830421 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.320214 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"] Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.649171 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.654576 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.656859 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.657102 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.659801 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.660397 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gvltb" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.676860 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784201 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmsp\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-kube-api-access-wvmsp\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784310 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-lock\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784334 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bae269-30fb-4c0c-8e00-717f68ef2b01-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.784385 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-cache\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886023 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmsp\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-kube-api-access-wvmsp\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886137 
4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-lock\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886185 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bae269-30fb-4c0c-8e00-717f68ef2b01-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886261 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-cache\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:49 crc kubenswrapper[4720]: E0202 09:14:49.886403 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 09:14:49 crc kubenswrapper[4720]: E0202 09:14:49.886423 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 09:14:49 crc kubenswrapper[4720]: E0202 09:14:49.886468 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift podName:90bae269-30fb-4c0c-8e00-717f68ef2b01 nodeName:}" failed. No retries permitted until 2026-02-02 09:14:50.38644994 +0000 UTC m=+1124.242075506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift") pod "swift-storage-0" (UID: "90bae269-30fb-4c0c-8e00-717f68ef2b01") : configmap "swift-ring-files" not found
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886406 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.886768 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.887106 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-lock\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.887169 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/90bae269-30fb-4c0c-8e00-717f68ef2b01-cache\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.895435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90bae269-30fb-4c0c-8e00-717f68ef2b01-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.905822 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmsp\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-kube-api-access-wvmsp\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:49 crc kubenswrapper[4720]: I0202 09:14:49.916842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.128214 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v7qgg"]
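The etc-swift failures above are a startup ordering race: swift-storage-0 mounts a projected volume whose configMap source, swift-ring-files, is only published later by the swift-ring-rebalance job (whose SyncLoop ADD appears just above). Because a projected configMap source is not optional by default, SetUp fails hard until the configmap exists. A sketch of the implied volume shape, with field values inferred from the log rather than taken from the actual swift-storage-0 manifest:

```go
// Shape of a projected volume that fails MountVolume.SetUp while its
// configmap is missing. Built from k8s.io/api/core/v1 types; the names
// are inferred from the log entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := false // the default; a missing configmap is a hard error
	etcSwift := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
						Optional:             &optional,
					},
				}},
			},
		},
	}
	fmt.Println(etcSwift.Name)
}
```

Setting Optional to true would let the pod start with an empty directory instead; here the operator evidently wants the mount to block until the ring files exist.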
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.129808 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.132388 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.132593 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.136069 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.156854 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7qgg"]
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.192794 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.192934 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.192970 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.192998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.193022 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.193071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.193111 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhcx\" (UniqueName: \"kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02
09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.294680 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.294850 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295323 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295447 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295506 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.295742 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhcx\" (UniqueName: \"kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.296928 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.298210 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.298466 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.299861 4720 generic.go:334] "Generic (PLEG): container finished" podID="8547591d-9191-4d26-83e2-17cbc78ec126" containerID="602ba461b7641e239dea25fd7e7bcedd3950fa631d75587f8accb918b8fd56a9" exitCode=0 Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.299963 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fcsps" event={"ID":"8547591d-9191-4d26-83e2-17cbc78ec126","Type":"ContainerDied","Data":"602ba461b7641e239dea25fd7e7bcedd3950fa631d75587f8accb918b8fd56a9"} Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.300003 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fcsps" event={"ID":"8547591d-9191-4d26-83e2-17cbc78ec126","Type":"ContainerStarted","Data":"7670ae41c486d430d8b6b38ee386e38d5c14f27ca2f6b9a6e8a59110a4b31ac2"} Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.301330 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.314246 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.319006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhcx\" (UniqueName: \"kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx\") pod \"swift-ring-rebalance-v7qgg\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") " pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.397556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:50 crc kubenswrapper[4720]: E0202 09:14:50.397717 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 09:14:50 crc kubenswrapper[4720]: E0202 09:14:50.397735 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 09:14:50 crc kubenswrapper[4720]: E0202 09:14:50.397777 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift podName:90bae269-30fb-4c0c-8e00-717f68ef2b01 nodeName:}" failed. No retries permitted until 2026-02-02 09:14:51.39776312 +0000 UTC m=+1125.253388666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift") pod "swift-storage-0" (UID: "90bae269-30fb-4c0c-8e00-717f68ef2b01") : configmap "swift-ring-files" not found Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.470552 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7qgg" Feb 02 09:14:50 crc kubenswrapper[4720]: I0202 09:14:50.948946 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7qgg"] Feb 02 09:14:51 crc kubenswrapper[4720]: W0202 09:14:51.081195 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf80e41dc_2fd4_4987_9ec7_53addd3b9048.slice/crio-9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056 WatchSource:0}: Error finding container 9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056: Status 404 returned error can't find the container with id 9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056 Feb 02 09:14:51 crc kubenswrapper[4720]: I0202 09:14:51.332579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fcsps" event={"ID":"8547591d-9191-4d26-83e2-17cbc78ec126","Type":"ContainerStarted","Data":"c54986bc443adbc25acefdad6156df95b0b0e3a23dd2b068bb83761d5827089e"} Feb 02 09:14:51 crc kubenswrapper[4720]: I0202 09:14:51.335283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7qgg" event={"ID":"f80e41dc-2fd4-4987-9ec7-53addd3b9048","Type":"ContainerStarted","Data":"9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056"} Feb 02 09:14:51 crc kubenswrapper[4720]: I0202 09:14:51.357467 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-fcsps" podStartSLOduration=3.357446832 podStartE2EDuration="3.357446832s" podCreationTimestamp="2026-02-02 09:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:14:51.3506771 +0000 UTC m=+1125.206302656" watchObservedRunningTime="2026-02-02 09:14:51.357446832 +0000 UTC m=+1125.213072388" Feb 02 09:14:51 crc kubenswrapper[4720]: I0202 09:14:51.426752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:51 crc kubenswrapper[4720]: E0202 09:14:51.426958 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 09:14:51 crc kubenswrapper[4720]: E0202 09:14:51.427780 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 09:14:51 crc kubenswrapper[4720]: E0202 09:14:51.427838 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift 
podName:90bae269-30fb-4c0c-8e00-717f68ef2b01 nodeName:}" failed. No retries permitted until 2026-02-02 09:14:53.427818756 +0000 UTC m=+1127.283444312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift") pod "swift-storage-0" (UID: "90bae269-30fb-4c0c-8e00-717f68ef2b01") : configmap "swift-ring-files" not found Feb 02 09:14:52 crc kubenswrapper[4720]: I0202 09:14:52.345087 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:53 crc kubenswrapper[4720]: I0202 09:14:53.354765 4720 generic.go:334] "Generic (PLEG): container finished" podID="289905c2-8b8c-4d85-a9d4-19ac7c9b9b06" containerID="460ef06d69b1ede10f9b8a62f967ebeab372e07aa58945d9d4cbb7e4fc05c3ae" exitCode=0 Feb 02 09:14:53 crc kubenswrapper[4720]: I0202 09:14:53.354824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06","Type":"ContainerDied","Data":"460ef06d69b1ede10f9b8a62f967ebeab372e07aa58945d9d4cbb7e4fc05c3ae"} Feb 02 09:14:53 crc kubenswrapper[4720]: I0202 09:14:53.357505 4720 generic.go:334] "Generic (PLEG): container finished" podID="29c13267-2f9e-4e1c-b52f-66be31da5155" containerID="f0d3cbd39eb49b97e41f81407930d2d80f8617138d85eea78dfee21df8b38ff6" exitCode=0 Feb 02 09:14:53 crc kubenswrapper[4720]: I0202 09:14:53.357649 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29c13267-2f9e-4e1c-b52f-66be31da5155","Type":"ContainerDied","Data":"f0d3cbd39eb49b97e41f81407930d2d80f8617138d85eea78dfee21df8b38ff6"} Feb 02 09:14:53 crc kubenswrapper[4720]: I0202 09:14:53.466223 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:53 crc kubenswrapper[4720]: E0202 09:14:53.466457 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 09:14:53 crc kubenswrapper[4720]: E0202 09:14:53.466492 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 09:14:53 crc kubenswrapper[4720]: E0202 09:14:53.466544 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift podName:90bae269-30fb-4c0c-8e00-717f68ef2b01 nodeName:}" failed. No retries permitted until 2026-02-02 09:14:57.46652619 +0000 UTC m=+1131.322151756 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift") pod "swift-storage-0" (UID: "90bae269-30fb-4c0c-8e00-717f68ef2b01") : configmap "swift-ring-files" not found Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.400954 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29c13267-2f9e-4e1c-b52f-66be31da5155","Type":"ContainerStarted","Data":"8a681bdfbdb345e985d6122d29399990b938694017f73b40bcc03d987b7f274a"} Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.403220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7qgg" event={"ID":"f80e41dc-2fd4-4987-9ec7-53addd3b9048","Type":"ContainerStarted","Data":"458f989d40a12d95dd13a042eab8e52317359659014cc95c81c45a31d6f79fa5"} Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.406683 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"289905c2-8b8c-4d85-a9d4-19ac7c9b9b06","Type":"ContainerStarted","Data":"511c4a6c100294197b96fc42dfda8ffb64b0941ba9f47609a0e8fe91962756bc"} Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.441653 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.467493343 podStartE2EDuration="51.441633963s" podCreationTimestamp="2026-02-02 09:14:04 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.429586151 +0000 UTC m=+1089.285211707" lastFinishedPulling="2026-02-02 09:14:47.403726761 +0000 UTC m=+1121.259352327" observedRunningTime="2026-02-02 09:14:55.434534514 +0000 UTC m=+1129.290160080" watchObservedRunningTime="2026-02-02 09:14:55.441633963 +0000 UTC m=+1129.297259519" Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.472108 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v7qgg" podStartSLOduration=2.099474456 podStartE2EDuration="5.472082984s" podCreationTimestamp="2026-02-02 09:14:50 +0000 UTC" firstStartedPulling="2026-02-02 09:14:51.084209149 +0000 UTC m=+1124.939834715" lastFinishedPulling="2026-02-02 09:14:54.456817687 +0000 UTC m=+1128.312443243" observedRunningTime="2026-02-02 09:14:55.458184233 +0000 UTC m=+1129.313809829" watchObservedRunningTime="2026-02-02 09:14:55.472082984 +0000 UTC m=+1129.327708570" Feb 02 09:14:55 crc kubenswrapper[4720]: I0202 09:14:55.493692 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.466424737 podStartE2EDuration="52.493666567s" podCreationTimestamp="2026-02-02 09:14:03 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.455656923 +0000 UTC m=+1089.311282489" lastFinishedPulling="2026-02-02 09:14:47.482898763 +0000 UTC m=+1121.338524319" observedRunningTime="2026-02-02 09:14:55.483754695 +0000 UTC m=+1129.339380331" watchObservedRunningTime="2026-02-02 09:14:55.493666567 +0000 UTC m=+1129.349292153" Feb 02 09:14:56 crc kubenswrapper[4720]: I0202 09:14:56.340996 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:56 crc kubenswrapper[4720]: I0202 09:14:56.341176 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 09:14:57 crc kubenswrapper[4720]: I0202 09:14:57.535483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0" Feb 02 09:14:57 crc kubenswrapper[4720]: E0202 09:14:57.535647 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 09:14:57 crc kubenswrapper[4720]: E0202 09:14:57.535668 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 09:14:57 crc kubenswrapper[4720]: E0202 09:14:57.535718 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift podName:90bae269-30fb-4c0c-8e00-717f68ef2b01 nodeName:}" failed. No retries permitted until 2026-02-02 09:15:05.535701315 +0000 UTC m=+1139.391326881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift") pod "swift-storage-0" (UID: "90bae269-30fb-4c0c-8e00-717f68ef2b01") : configmap "swift-ring-files" not found Feb 02 09:14:58 crc kubenswrapper[4720]: I0202 09:14:58.421465 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 09:14:58 crc kubenswrapper[4720]: I0202 09:14:58.832236 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-fcsps" Feb 02 09:14:58 crc kubenswrapper[4720]: I0202 09:14:58.883530 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"] Feb 02 09:14:58 crc kubenswrapper[4720]: I0202 09:14:58.883802 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="dnsmasq-dns" containerID="cri-o://a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849" gracePeriod=10 Feb 02 09:14:58 crc kubenswrapper[4720]: I0202 09:14:58.957914 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.362430 4720 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.362430 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85"
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.386236 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc\") pod \"2641f426-af38-4405-8986-3edf8b8401db\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") "
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.386364 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config\") pod \"2641f426-af38-4405-8986-3edf8b8401db\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") "
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.386485 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb\") pod \"2641f426-af38-4405-8986-3edf8b8401db\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") "
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.386535 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nkj2\" (UniqueName: \"kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2\") pod \"2641f426-af38-4405-8986-3edf8b8401db\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") "
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.386654 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb\") pod \"2641f426-af38-4405-8986-3edf8b8401db\" (UID: \"2641f426-af38-4405-8986-3edf8b8401db\") "
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.423192 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2" (OuterVolumeSpecName: "kube-api-access-8nkj2") pod "2641f426-af38-4405-8986-3edf8b8401db" (UID: "2641f426-af38-4405-8986-3edf8b8401db"). InnerVolumeSpecName "kube-api-access-8nkj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.441304 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2641f426-af38-4405-8986-3edf8b8401db" (UID: "2641f426-af38-4405-8986-3edf8b8401db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.441547 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2641f426-af38-4405-8986-3edf8b8401db" (UID: "2641f426-af38-4405-8986-3edf8b8401db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.443619 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2641f426-af38-4405-8986-3edf8b8401db" (UID: "2641f426-af38-4405-8986-3edf8b8401db"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.455901 4720 generic.go:334] "Generic (PLEG): container finished" podID="2641f426-af38-4405-8986-3edf8b8401db" containerID="a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849" exitCode=0 Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.455962 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" event={"ID":"2641f426-af38-4405-8986-3edf8b8401db","Type":"ContainerDied","Data":"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849"} Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.455989 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" event={"ID":"2641f426-af38-4405-8986-3edf8b8401db","Type":"ContainerDied","Data":"f28c8c49e1559669a90daa0e6d3a930d1433a4409416988a19070211201b53f0"} Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.456005 4720 scope.go:117] "RemoveContainer" containerID="a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.458034 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bw85" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.461420 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config" (OuterVolumeSpecName: "config") pod "2641f426-af38-4405-8986-3edf8b8401db" (UID: "2641f426-af38-4405-8986-3edf8b8401db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.483964 4720 scope.go:117] "RemoveContainer" containerID="d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.487761 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.487802 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.487817 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.487833 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nkj2\" (UniqueName: \"kubernetes.io/projected/2641f426-af38-4405-8986-3edf8b8401db-kube-api-access-8nkj2\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.487846 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2641f426-af38-4405-8986-3edf8b8401db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.511990 4720 scope.go:117] "RemoveContainer" containerID="a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849" Feb 02 09:14:59 crc kubenswrapper[4720]: E0202 09:14:59.513125 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849\": container with ID starting with a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849 not found: ID does not exist" containerID="a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849"
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.513198 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849"} err="failed to get container status \"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849\": rpc error: code = NotFound desc = could not find container \"a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849\": container with ID starting with a4410eaf4c4d2953a2591c5838c97b2c59f217668d806ee9ecc446d8728f1849 not found: ID does not exist"
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.513255 4720 scope.go:117] "RemoveContainer" containerID="d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3"
Feb 02 09:14:59 crc kubenswrapper[4720]: E0202 09:14:59.513645 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3\": container with ID starting with d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3 not found: ID does not exist" containerID="d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3"
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.513681 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3"} err="failed to get container status \"d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3\": rpc error: code = NotFound desc = could not find container \"d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3\": container with ID starting with d1c57c7cac972119c1fecc8b15da53e8b881204c0315b10e6aa7355f6512f6f3 not found: ID does not exist"
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.821664 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"]
Feb 02 09:14:59 crc kubenswrapper[4720]: I0202 09:14:59.832197 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bw85"]
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.164332 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"]
Feb 02 09:15:00 crc kubenswrapper[4720]: E0202 09:15:00.164737 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="init"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.164765 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="init"
Feb 02 09:15:00 crc kubenswrapper[4720]: E0202 09:15:00.164796 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="dnsmasq-dns"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.164806 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="dnsmasq-dns"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.165055 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2641f426-af38-4405-8986-3edf8b8401db" containerName="dnsmasq-dns"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.165680 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.168226 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.168520 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.185026 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"]
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.200342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.301978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.302075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghdl\" (UniqueName: \"kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.302131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.302897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: E0202 09:15:00.400168 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:54464->38.102.83.177:44747: write tcp 38.102.83.177:54464->38.102.83.177:44747: write: broken pipe
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.404234 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghdl\" (UniqueName: \"kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.404322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.410815 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.431350 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghdl\" (UniqueName: \"kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl\") pod \"collect-profiles-29500395-dpmkv\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.491860 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.504060 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.596565 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.904662 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2641f426-af38-4405-8986-3edf8b8401db" path="/var/lib/kubelet/pods/2641f426-af38-4405-8986-3edf8b8401db/volumes"
Feb 02 09:15:00 crc kubenswrapper[4720]: I0202 09:15:00.985653 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"]
Feb 02 09:15:01 crc kubenswrapper[4720]: I0202 09:15:01.476055 4720 generic.go:334] "Generic (PLEG): container finished" podID="24895420-6b66-40eb-8ec5-78f761760fe7" containerID="749925605a100b9397d2ec9ab0c368e37bddf79696c2b1139426dc46facf1868" exitCode=0
Feb 02 09:15:01 crc kubenswrapper[4720]: I0202 09:15:01.476536 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv" event={"ID":"24895420-6b66-40eb-8ec5-78f761760fe7","Type":"ContainerDied","Data":"749925605a100b9397d2ec9ab0c368e37bddf79696c2b1139426dc46facf1868"}
Feb 02 09:15:01 crc kubenswrapper[4720]: I0202 09:15:01.476564 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv" event={"ID":"24895420-6b66-40eb-8ec5-78f761760fe7","Type":"ContainerStarted","Data":"fea5d1f0de991deaeae4f3a8106ef8943e97c9015fb926c31651e2f1b7c3362d"}
Feb 02 09:15:01 crc kubenswrapper[4720]: I0202 09:15:01.478357 4720 generic.go:334] "Generic (PLEG): container finished" podID="f80e41dc-2fd4-4987-9ec7-53addd3b9048" containerID="458f989d40a12d95dd13a042eab8e52317359659014cc95c81c45a31d6f79fa5" exitCode=0
Feb 02 09:15:01 crc kubenswrapper[4720]: I0202 09:15:01.478435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7qgg" event={"ID":"f80e41dc-2fd4-4987-9ec7-53addd3b9048","Type":"ContainerDied","Data":"458f989d40a12d95dd13a042eab8e52317359659014cc95c81c45a31d6f79fa5"}
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.081295 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-774qf" podUID="57c88c8b-430e-40d7-9598-464d1dbead23" containerName="ovn-controller" probeResult="failure" output=<
Feb 02 09:15:02 crc kubenswrapper[4720]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 02 09:15:02 crc kubenswrapper[4720]: >
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.791194 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.913471 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.949995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghdl\" (UniqueName: \"kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl\") pod \"24895420-6b66-40eb-8ec5-78f761760fe7\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") "
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.950152 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume\") pod \"24895420-6b66-40eb-8ec5-78f761760fe7\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") "
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.950270 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume\") pod \"24895420-6b66-40eb-8ec5-78f761760fe7\" (UID: \"24895420-6b66-40eb-8ec5-78f761760fe7\") "
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.951220 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume" (OuterVolumeSpecName: "config-volume") pod "24895420-6b66-40eb-8ec5-78f761760fe7" (UID: "24895420-6b66-40eb-8ec5-78f761760fe7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.958470 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24895420-6b66-40eb-8ec5-78f761760fe7" (UID: "24895420-6b66-40eb-8ec5-78f761760fe7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:15:02 crc kubenswrapper[4720]: I0202 09:15:02.959167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl" (OuterVolumeSpecName: "kube-api-access-2ghdl") pod "24895420-6b66-40eb-8ec5-78f761760fe7" (UID: "24895420-6b66-40eb-8ec5-78f761760fe7"). InnerVolumeSpecName "kube-api-access-2ghdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052166 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052223 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052304 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052350 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052368 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxhcx\" (UniqueName: \"kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052403 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052442 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices\") pod \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\" (UID: \"f80e41dc-2fd4-4987-9ec7-53addd3b9048\") "
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052959 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24895420-6b66-40eb-8ec5-78f761760fe7-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052978 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghdl\" (UniqueName: \"kubernetes.io/projected/24895420-6b66-40eb-8ec5-78f761760fe7-kube-api-access-2ghdl\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.052991 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24895420-6b66-40eb-8ec5-78f761760fe7-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.062363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.066558 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx" (OuterVolumeSpecName: "kube-api-access-nxhcx") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "kube-api-access-nxhcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.067256 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.083222 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.083667 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.085593 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts" (OuterVolumeSpecName: "scripts") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.086165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f80e41dc-2fd4-4987-9ec7-53addd3b9048" (UID: "f80e41dc-2fd4-4987-9ec7-53addd3b9048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155066 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155119 4720 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80e41dc-2fd4-4987-9ec7-53addd3b9048-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155142 4720 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155160 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155177 4720 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80e41dc-2fd4-4987-9ec7-53addd3b9048-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155193 4720 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80e41dc-2fd4-4987-9ec7-53addd3b9048-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.155211 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxhcx\" (UniqueName: \"kubernetes.io/projected/f80e41dc-2fd4-4987-9ec7-53addd3b9048-kube-api-access-nxhcx\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.498361 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv" event={"ID":"24895420-6b66-40eb-8ec5-78f761760fe7","Type":"ContainerDied","Data":"fea5d1f0de991deaeae4f3a8106ef8943e97c9015fb926c31651e2f1b7c3362d"}
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.498435 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea5d1f0de991deaeae4f3a8106ef8943e97c9015fb926c31651e2f1b7c3362d"
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.498484 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.500370 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7qgg" event={"ID":"f80e41dc-2fd4-4987-9ec7-53addd3b9048","Type":"ContainerDied","Data":"9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056"}
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.500408 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f942acc10358b6f484d117acbba2ed18bcab1937fcbd48c537964c296a4c056"
Feb 02 09:15:03 crc kubenswrapper[4720]: I0202 09:15:03.500445 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7qgg"
Feb 02 09:15:04 crc kubenswrapper[4720]: I0202 09:15:04.766827 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 09:15:04 crc kubenswrapper[4720]: I0202 09:15:04.766979 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 09:15:04 crc kubenswrapper[4720]: I0202 09:15:04.855434 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.095960 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g4q9p"]
Feb 02 09:15:05 crc kubenswrapper[4720]: E0202 09:15:05.096824 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24895420-6b66-40eb-8ec5-78f761760fe7" containerName="collect-profiles"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.096854 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24895420-6b66-40eb-8ec5-78f761760fe7" containerName="collect-profiles"
Feb 02 09:15:05 crc kubenswrapper[4720]: E0202 09:15:05.096930 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80e41dc-2fd4-4987-9ec7-53addd3b9048" containerName="swift-ring-rebalance"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.096944 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80e41dc-2fd4-4987-9ec7-53addd3b9048" containerName="swift-ring-rebalance"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.097207 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="24895420-6b66-40eb-8ec5-78f761760fe7" containerName="collect-profiles"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.097252 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80e41dc-2fd4-4987-9ec7-53addd3b9048" containerName="swift-ring-rebalance"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.098071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.100989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.117649 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g4q9p"]
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.190649 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt45s\" (UniqueName: \"kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.191062 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.292651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.292825 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt45s\" (UniqueName: \"kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.294812 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.319897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt45s\" (UniqueName: \"kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s\") pod \"root-account-create-update-g4q9p\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") " pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.429637 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.616317 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.626948 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/90bae269-30fb-4c0c-8e00-717f68ef2b01-etc-swift\") pod \"swift-storage-0\" (UID: \"90bae269-30fb-4c0c-8e00-717f68ef2b01\") " pod="openstack/swift-storage-0"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.654260 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.677825 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 02 09:15:05 crc kubenswrapper[4720]: I0202 09:15:05.996803 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g4q9p"]
Feb 02 09:15:06 crc kubenswrapper[4720]: W0202 09:15:06.243918 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90bae269_30fb_4c0c_8e00_717f68ef2b01.slice/crio-1d0f2dcd00ab33ffb26feedb8f0502f49820cce4cb00e882aa5bbf54bc606f5f WatchSource:0}: Error finding container 1d0f2dcd00ab33ffb26feedb8f0502f49820cce4cb00e882aa5bbf54bc606f5f: Status 404 returned error can't find the container with id 1d0f2dcd00ab33ffb26feedb8f0502f49820cce4cb00e882aa5bbf54bc606f5f
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.251395 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.323045 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7d25d"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.324588 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.337521 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7d25d"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.430487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dksh\" (UniqueName: \"kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.430593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.521969 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1d0d-account-create-update-6xwpp"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.523103 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.525910 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.530435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"1d0f2dcd00ab33ffb26feedb8f0502f49820cce4cb00e882aa5bbf54bc606f5f"}
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.531483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.531585 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dksh\" (UniqueName: \"kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.532436 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0e37249-5eec-48c8-9366-9a291edd750e" containerID="3a7bd0c37ff704610c3790b8d0cb21067ff3f371f5ff8d33e71d6528f34be021" exitCode=0
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.532453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.532487 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g4q9p" event={"ID":"e0e37249-5eec-48c8-9366-9a291edd750e","Type":"ContainerDied","Data":"3a7bd0c37ff704610c3790b8d0cb21067ff3f371f5ff8d33e71d6528f34be021"}
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.532508 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g4q9p" event={"ID":"e0e37249-5eec-48c8-9366-9a291edd750e","Type":"ContainerStarted","Data":"fd1d37c5308d40d2424f423e94c49aeded49b2cd79842f71ff73c6bebe4f2320"}
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.540608 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1d0d-account-create-update-6xwpp"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.576812 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dksh\" (UniqueName: \"kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh\") pod \"keystone-db-create-7d25d\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.612080 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zxj6c"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.613270 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.624842 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zxj6c"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.635772 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8g7\" (UniqueName: \"kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.635996 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.680519 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7d25d"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.720779 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4ac9-account-create-update-tdwhd"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.721690 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.724818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.737644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqwg\" (UniqueName: \"kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.737729 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8g7\" (UniqueName: \"kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.737796 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.737854 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.738806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.738865 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4ac9-account-create-update-tdwhd"]
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.753837 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8g7\" (UniqueName: \"kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7\") pod \"keystone-1d0d-account-create-update-6xwpp\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.838664 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqwg\" (UniqueName: \"kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.838735 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.838774 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.838817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwvs\" (UniqueName: \"kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.839409 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.849515 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d0d-account-create-update-6xwpp"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.857844 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqwg\" (UniqueName: \"kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg\") pod \"placement-db-create-zxj6c\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.940119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.940203 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwvs\" (UniqueName: \"kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.940521 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zxj6c"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.941133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:06 crc kubenswrapper[4720]: I0202 09:15:06.967089 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhwvs\" (UniqueName: \"kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs\") pod \"placement-4ac9-account-create-update-tdwhd\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.107051 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-774qf" podUID="57c88c8b-430e-40d7-9598-464d1dbead23" containerName="ovn-controller" probeResult="failure" output=<
Feb 02 09:15:07 crc kubenswrapper[4720]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 02 09:15:07 crc kubenswrapper[4720]: >
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.118625 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b979n"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.131619 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ac9-account-create-update-tdwhd"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.150948 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b979n"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.153832 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7d25d"]
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.281957 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1d0d-account-create-update-6xwpp"]
Feb 02 09:15:07 crc kubenswrapper[4720]: W0202 09:15:07.297254 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf4aafe2_8ad5_43c7_b929_2c357e58ff01.slice/crio-e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64 WatchSource:0}: Error finding container e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64: Status 404 returned error can't find the container with id e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.303574 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.362134 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-774qf-config-c8h7q"]
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.363741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.366601 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.380204 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-774qf-config-c8h7q"]
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.421927 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zxj6c"]
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460197 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460234 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhg9\" (UniqueName: \"kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460278 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460326 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.460465 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.541196 4720 generic.go:334] "Generic (PLEG): container finished" podID="527ad190-1f46-4b04-8379-72f150ba294d" containerID="3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1" exitCode=0
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.541249 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerDied","Data":"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1"}
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.551029 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d0d-account-create-update-6xwpp" event={"ID":"af4aafe2-8ad5-43c7-b929-2c357e58ff01","Type":"ContainerStarted","Data":"e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64"}
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.553520 4720 generic.go:334] "Generic (PLEG): container finished" podID="8c3389dc-d691-4727-966f-38f108bd6309" containerID="e859681590e0432a59ce41f20057db9f956a7378399fbbe6711d2e6bb86d1b7e" exitCode=0
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.553587 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d25d" event={"ID":"8c3389dc-d691-4727-966f-38f108bd6309","Type":"ContainerDied","Data":"e859681590e0432a59ce41f20057db9f956a7378399fbbe6711d2e6bb86d1b7e"}
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.553613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d25d" event={"ID":"8c3389dc-d691-4727-966f-38f108bd6309","Type":"ContainerStarted","Data":"c575e579202fd01eb3921ce136e005bae9b97da4abe70215dbd208a5280e9c96"}
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.556168 4720 generic.go:334] "Generic (PLEG): container finished" podID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerID="bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369" exitCode=0
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.556243 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerDied","Data":"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369"}
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.557387 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4ac9-account-create-update-tdwhd"]
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561697 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhg9\" (UniqueName: \"kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561791 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561969 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.561979 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.562109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.563909 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.564365 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.585924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhg9\" (UniqueName: \"kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9\") pod \"ovn-controller-774qf-config-c8h7q\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.699268 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf-config-c8h7q"
Feb 02 09:15:07 crc kubenswrapper[4720]: W0202 09:15:07.723760 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff441e88_a0ed_4b80_80d9_32ee9885ad6a.slice/crio-d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598 WatchSource:0}: Error finding container d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598: Status 404 returned error can't find the container with id d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598
Feb 02 09:15:07 crc kubenswrapper[4720]: W0202 09:15:07.729096 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966b44cd_7bad_4ec3_b906_16a88bd144b2.slice/crio-fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2 WatchSource:0}: Error finding container fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2: Status 404 returned error can't find the container with id fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.741434 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.858707 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.973844 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt45s\" (UniqueName: \"kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s\") pod \"e0e37249-5eec-48c8-9366-9a291edd750e\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") "
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.974316 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts\") pod \"e0e37249-5eec-48c8-9366-9a291edd750e\" (UID: \"e0e37249-5eec-48c8-9366-9a291edd750e\") "
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.975441 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0e37249-5eec-48c8-9366-9a291edd750e" (UID: "e0e37249-5eec-48c8-9366-9a291edd750e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:07 crc kubenswrapper[4720]: I0202 09:15:07.983073 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s" (OuterVolumeSpecName: "kube-api-access-jt45s") pod "e0e37249-5eec-48c8-9366-9a291edd750e" (UID: "e0e37249-5eec-48c8-9366-9a291edd750e"). InnerVolumeSpecName "kube-api-access-jt45s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.076909 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt45s\" (UniqueName: \"kubernetes.io/projected/e0e37249-5eec-48c8-9366-9a291edd750e-kube-api-access-jt45s\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.076936 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e37249-5eec-48c8-9366-9a291edd750e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.262670 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-774qf-config-c8h7q"]
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.570170 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g4q9p"
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.570230 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g4q9p" event={"ID":"e0e37249-5eec-48c8-9366-9a291edd750e","Type":"ContainerDied","Data":"fd1d37c5308d40d2424f423e94c49aeded49b2cd79842f71ff73c6bebe4f2320"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.571096 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1d37c5308d40d2424f423e94c49aeded49b2cd79842f71ff73c6bebe4f2320"
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.587353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-774qf-config-c8h7q" event={"ID":"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5","Type":"ContainerStarted","Data":"2b4ca454b674b48386d024bffbe94b27b88a96c9596b9bc817e7267f7ae58ecf"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.589458 4720 generic.go:334] "Generic (PLEG): container finished" podID="966b44cd-7bad-4ec3-b906-16a88bd144b2" containerID="6c4d5ea96bab908e8b4f5ba7b0b91d4eff32b38adecec74a522676f253d87e31" exitCode=0
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.589699 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zxj6c" event={"ID":"966b44cd-7bad-4ec3-b906-16a88bd144b2","Type":"ContainerDied","Data":"6c4d5ea96bab908e8b4f5ba7b0b91d4eff32b38adecec74a522676f253d87e31"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.589929 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zxj6c" event={"ID":"966b44cd-7bad-4ec3-b906-16a88bd144b2","Type":"ContainerStarted","Data":"fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.596606 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerStarted","Data":"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.597471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.601220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerStarted","Data":"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.603175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"3ae9fae2d92be8e01d728e68fcc1ba196768b38c3e908d4722d065ebd926331b"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.603209 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"e4dddb03b204bceffc0af7bfc6b7157e1492ba5e7d85965571ca19d52d1903eb"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.610276 4720 generic.go:334] "Generic (PLEG): container finished" podID="ff441e88-a0ed-4b80-80d9-32ee9885ad6a" containerID="25bbc41f91b78b006b9bb53474e02a04dcc7ad1a95e71477286dea630a2d527d" exitCode=0
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.610378 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ac9-account-create-update-tdwhd" event={"ID":"ff441e88-a0ed-4b80-80d9-32ee9885ad6a","Type":"ContainerDied","Data":"25bbc41f91b78b006b9bb53474e02a04dcc7ad1a95e71477286dea630a2d527d"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.610410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ac9-account-create-update-tdwhd" event={"ID":"ff441e88-a0ed-4b80-80d9-32ee9885ad6a","Type":"ContainerStarted","Data":"d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.614046 4720 generic.go:334] "Generic (PLEG): container finished" podID="af4aafe2-8ad5-43c7-b929-2c357e58ff01" containerID="306b86e68c2d10e02819653f56e38eddf0db63c44b06064d9817809e8a604845" exitCode=0
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.614390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d0d-account-create-update-6xwpp" event={"ID":"af4aafe2-8ad5-43c7-b929-2c357e58ff01","Type":"ContainerDied","Data":"306b86e68c2d10e02819653f56e38eddf0db63c44b06064d9817809e8a604845"}
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.644042 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.644970337 podStartE2EDuration="1m6.644024227s" podCreationTimestamp="2026-02-02 09:14:02 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.626319171 +0000 UTC m=+1089.481944727" lastFinishedPulling="2026-02-02 09:14:31.625373061 +0000 UTC m=+1105.480998617" observedRunningTime="2026-02-02 09:15:08.637150323 +0000 UTC m=+1142.492775879" watchObservedRunningTime="2026-02-02 09:15:08.644024227 +0000 UTC m=+1142.499649773"
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.661124 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.351174172 podStartE2EDuration="1m7.661103419s" podCreationTimestamp="2026-02-02 09:14:01 +0000 UTC" firstStartedPulling="2026-02-02 09:14:15.419614444 +0000 UTC m=+1089.275240000" lastFinishedPulling="2026-02-02 09:14:31.729543691 +0000 UTC m=+1105.585169247" observedRunningTime="2026-02-02 09:15:08.658829219 +0000 UTC m=+1142.514454785" watchObservedRunningTime="2026-02-02 09:15:08.661103419 +0000 UTC m=+1142.516728985"
Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.870623 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-create-7d25d" Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.995498 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dksh\" (UniqueName: \"kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh\") pod \"8c3389dc-d691-4727-966f-38f108bd6309\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.995852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts\") pod \"8c3389dc-d691-4727-966f-38f108bd6309\" (UID: \"8c3389dc-d691-4727-966f-38f108bd6309\") " Feb 02 09:15:08 crc kubenswrapper[4720]: I0202 09:15:08.996814 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c3389dc-d691-4727-966f-38f108bd6309" (UID: "8c3389dc-d691-4727-966f-38f108bd6309"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.000530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh" (OuterVolumeSpecName: "kube-api-access-7dksh") pod "8c3389dc-d691-4727-966f-38f108bd6309" (UID: "8c3389dc-d691-4727-966f-38f108bd6309"). InnerVolumeSpecName "kube-api-access-7dksh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.098489 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dksh\" (UniqueName: \"kubernetes.io/projected/8c3389dc-d691-4727-966f-38f108bd6309-kube-api-access-7dksh\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.098514 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c3389dc-d691-4727-966f-38f108bd6309-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.622910 4720 generic.go:334] "Generic (PLEG): container finished" podID="f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" containerID="324c2f20e4d24da4e428f6fddac0b8de274ce85319d59f223625777ab49caacd" exitCode=0 Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.622965 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-774qf-config-c8h7q" event={"ID":"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5","Type":"ContainerDied","Data":"324c2f20e4d24da4e428f6fddac0b8de274ce85319d59f223625777ab49caacd"} Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.625769 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"a72dbdb9400efa775062b5c235ca9633db3481c36e11173379d323aa392bffbc"} Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.625806 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"18732cc0a73a4adea06ec20037c3a2e10e827cc1a5a9fcfb87a101257d693411"} Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.627304 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7d25d" Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.628948 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7d25d" event={"ID":"8c3389dc-d691-4727-966f-38f108bd6309","Type":"ContainerDied","Data":"c575e579202fd01eb3921ce136e005bae9b97da4abe70215dbd208a5280e9c96"} Feb 02 09:15:09 crc kubenswrapper[4720]: I0202 09:15:09.628992 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c575e579202fd01eb3921ce136e005bae9b97da4abe70215dbd208a5280e9c96" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.273738 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d0d-account-create-update-6xwpp" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.277815 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zxj6c" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.284384 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ac9-account-create-update-tdwhd" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.425932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts\") pod \"966b44cd-7bad-4ec3-b906-16a88bd144b2\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426030 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts\") pod \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426090 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts\") pod \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426134 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxqwg\" (UniqueName: \"kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg\") pod \"966b44cd-7bad-4ec3-b906-16a88bd144b2\" (UID: \"966b44cd-7bad-4ec3-b906-16a88bd144b2\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426173 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8g7\" (UniqueName: \"kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7\") pod \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\" (UID: \"af4aafe2-8ad5-43c7-b929-2c357e58ff01\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426221 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhwvs\" (UniqueName: \"kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs\") pod \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\" (UID: \"ff441e88-a0ed-4b80-80d9-32ee9885ad6a\") " Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426633 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff441e88-a0ed-4b80-80d9-32ee9885ad6a" (UID: "ff441e88-a0ed-4b80-80d9-32ee9885ad6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.426712 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af4aafe2-8ad5-43c7-b929-2c357e58ff01" (UID: "af4aafe2-8ad5-43c7-b929-2c357e58ff01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.427298 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "966b44cd-7bad-4ec3-b906-16a88bd144b2" (UID: "966b44cd-7bad-4ec3-b906-16a88bd144b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.433099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7" (OuterVolumeSpecName: "kube-api-access-kr8g7") pod "af4aafe2-8ad5-43c7-b929-2c357e58ff01" (UID: "af4aafe2-8ad5-43c7-b929-2c357e58ff01"). InnerVolumeSpecName "kube-api-access-kr8g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.433147 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs" (OuterVolumeSpecName: "kube-api-access-zhwvs") pod "ff441e88-a0ed-4b80-80d9-32ee9885ad6a" (UID: "ff441e88-a0ed-4b80-80d9-32ee9885ad6a"). InnerVolumeSpecName "kube-api-access-zhwvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.442950 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg" (OuterVolumeSpecName: "kube-api-access-dxqwg") pod "966b44cd-7bad-4ec3-b906-16a88bd144b2" (UID: "966b44cd-7bad-4ec3-b906-16a88bd144b2"). InnerVolumeSpecName "kube-api-access-dxqwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527681 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxqwg\" (UniqueName: \"kubernetes.io/projected/966b44cd-7bad-4ec3-b906-16a88bd144b2-kube-api-access-dxqwg\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527877 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8g7\" (UniqueName: \"kubernetes.io/projected/af4aafe2-8ad5-43c7-b929-2c357e58ff01-kube-api-access-kr8g7\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527903 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhwvs\" (UniqueName: \"kubernetes.io/projected/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-kube-api-access-zhwvs\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527913 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/966b44cd-7bad-4ec3-b906-16a88bd144b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527923 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af4aafe2-8ad5-43c7-b929-2c357e58ff01-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.527932 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff441e88-a0ed-4b80-80d9-32ee9885ad6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.635851 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zxj6c" event={"ID":"966b44cd-7bad-4ec3-b906-16a88bd144b2","Type":"ContainerDied","Data":"fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2"} Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.635964 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa67fb3be86a9a96f0279ba94bb2e23e28d1a00838481f1ff2d5023a49fb64d2" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.636028 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zxj6c" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.637633 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ac9-account-create-update-tdwhd" event={"ID":"ff441e88-a0ed-4b80-80d9-32ee9885ad6a","Type":"ContainerDied","Data":"d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598"} Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.637680 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d659455a055b23ec45f088609fe312ebdbd0d6234acff1459c0ae43594458598" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.637699 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ac9-account-create-update-tdwhd" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.638862 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1d0d-account-create-update-6xwpp" Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.638910 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d0d-account-create-update-6xwpp" event={"ID":"af4aafe2-8ad5-43c7-b929-2c357e58ff01","Type":"ContainerDied","Data":"e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64"} Feb 02 09:15:10 crc kubenswrapper[4720]: I0202 09:15:10.638931 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e125bd0db6b2caf525fea177ec927732ee76efe2a8efd6f1ef60a27cd882bb64" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.098703 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf-config-c8h7q" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.238702 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhg9\" (UniqueName: \"kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.238797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.239552 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.238867 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.239632 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.239772 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run" (OuterVolumeSpecName: "var-run") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.239864 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.239923 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.240072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts\") pod \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\" (UID: \"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5\") " Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.240646 4720 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.240667 4720 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.240679 4720 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.240794 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.241253 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts" (OuterVolumeSpecName: "scripts") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.243569 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9" (OuterVolumeSpecName: "kube-api-access-xhhg9") pod "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" (UID: "f6b6349d-3a5e-46c7-94a4-d68ed2209fe5"). InnerVolumeSpecName "kube-api-access-xhhg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.342696 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.343074 4720 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.343095 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhg9\" (UniqueName: \"kubernetes.io/projected/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5-kube-api-access-xhhg9\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.648471 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-774qf-config-c8h7q" event={"ID":"f6b6349d-3a5e-46c7-94a4-d68ed2209fe5","Type":"ContainerDied","Data":"2b4ca454b674b48386d024bffbe94b27b88a96c9596b9bc817e7267f7ae58ecf"} Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.648507 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b4ca454b674b48386d024bffbe94b27b88a96c9596b9bc817e7267f7ae58ecf" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.648560 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-774qf-config-c8h7q" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.664039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"eab9d63f9eeb769c4aa2587141bbaf4f04561aa3e1ac2d481c1105053a8d93d9"} Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.664313 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"d1e636bcb20d88bdfce30a1216508f4622d27d5cc8fdb00bed732fc21ba65dc7"} Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.664403 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"6c56b711451ba564e5a7c9926e1d6dce966d6c5acac87e8e7f26452ce42e2b34"} Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.950572 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8mvjv"] Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 09:15:11.950962 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966b44cd-7bad-4ec3-b906-16a88bd144b2" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.950985 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="966b44cd-7bad-4ec3-b906-16a88bd144b2" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 09:15:11.951006 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e37249-5eec-48c8-9366-9a291edd750e" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951016 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e37249-5eec-48c8-9366-9a291edd750e" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 
09:15:11.951026 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3389dc-d691-4727-966f-38f108bd6309" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951035 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3389dc-d691-4727-966f-38f108bd6309" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 09:15:11.951051 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" containerName="ovn-config" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951060 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" containerName="ovn-config" Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 09:15:11.951077 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff441e88-a0ed-4b80-80d9-32ee9885ad6a" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951086 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff441e88-a0ed-4b80-80d9-32ee9885ad6a" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: E0202 09:15:11.951097 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af4aafe2-8ad5-43c7-b929-2c357e58ff01" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951105 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="af4aafe2-8ad5-43c7-b929-2c357e58ff01" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951298 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff441e88-a0ed-4b80-80d9-32ee9885ad6a" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951311 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="af4aafe2-8ad5-43c7-b929-2c357e58ff01" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951326 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e37249-5eec-48c8-9366-9a291edd750e" containerName="mariadb-account-create-update" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951346 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" containerName="ovn-config" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951358 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3389dc-d691-4727-966f-38f108bd6309" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951371 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="966b44cd-7bad-4ec3-b906-16a88bd144b2" containerName="mariadb-database-create" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.951985 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:11 crc kubenswrapper[4720]: I0202 09:15:11.967819 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8mvjv"] Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.054857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72fn\" (UniqueName: \"kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.058280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.058238 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bde2-account-create-update-vjhgn"] Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.059451 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.069383 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.072765 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bde2-account-create-update-vjhgn"] Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.089068 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-774qf" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.184825 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7qg\" (UniqueName: \"kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.184958 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.184999 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72fn\" (UniqueName: \"kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.185031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " 
pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.185719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.204943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72fn\" (UniqueName: \"kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn\") pod \"glance-db-create-8mvjv\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") " pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.219693 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-774qf-config-c8h7q"] Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.228903 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-774qf-config-c8h7q"] Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.286618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.286720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7qg\" (UniqueName: \"kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.287591 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.302478 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7qg\" (UniqueName: \"kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg\") pod \"glance-bde2-account-create-update-vjhgn\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") " pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.311553 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mvjv" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.392818 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bde2-account-create-update-vjhgn" Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.666275 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bde2-account-create-update-vjhgn"] Feb 02 09:15:12 crc kubenswrapper[4720]: W0202 09:15:12.677158 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2813d031_5b81_42b0_82bd_9ef9dc55a7aa.slice/crio-64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83 WatchSource:0}: Error finding container 64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83: Status 404 returned error can't find the container with id 64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83 Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.691995 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"13c47386fb1bdc44ea50998a939bb5636793386ac707a9bb45dd686ba4ae982b"} Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.757572 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8mvjv"] Feb 02 09:15:12 crc kubenswrapper[4720]: W0202 09:15:12.771860 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04d0d455_b8c0_4bc5_9f79_5050021d55bc.slice/crio-d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d WatchSource:0}: Error finding container d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d: Status 404 returned error can't find the container with id d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d Feb 02 09:15:12 crc kubenswrapper[4720]: I0202 09:15:12.896788 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b6349d-3a5e-46c7-94a4-d68ed2209fe5" path="/var/lib/kubelet/pods/f6b6349d-3a5e-46c7-94a4-d68ed2209fe5/volumes" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.404850 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g4q9p"] Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.410912 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g4q9p"] Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.525199 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t585f"] Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.526038 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.526267 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.527927 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.547334 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t585f"] Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.710996 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcrsr\" (UniqueName: \"kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.711491 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.727652 4720 generic.go:334] "Generic (PLEG): container finished" podID="2813d031-5b81-42b0-82bd-9ef9dc55a7aa" containerID="56d991376cc9e7ecccb9a4da327e8a0dbacf60a09c5c0a199f95a552387b0524" exitCode=0 Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.727785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bde2-account-create-update-vjhgn" event={"ID":"2813d031-5b81-42b0-82bd-9ef9dc55a7aa","Type":"ContainerDied","Data":"56d991376cc9e7ecccb9a4da327e8a0dbacf60a09c5c0a199f95a552387b0524"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.727817 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bde2-account-create-update-vjhgn" event={"ID":"2813d031-5b81-42b0-82bd-9ef9dc55a7aa","Type":"ContainerStarted","Data":"64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.733075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"30fb0457e5f951379fefe8df995c4fd87ad1c31bb1c34bcad8dffa69f7823fa5"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.733115 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"1129a42fdc0d99df002743351b7d2dcaad6e8a40b791e6824b2d7838a688b98a"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.735618 4720 generic.go:334] "Generic (PLEG): container finished" podID="04d0d455-b8c0-4bc5-9f79-5050021d55bc" containerID="6d25fb036c59c2f014a67479e45ca13133f3db3e68f0efca7721568d4a98d053" exitCode=0 Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.735650 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mvjv" event={"ID":"04d0d455-b8c0-4bc5-9f79-5050021d55bc","Type":"ContainerDied","Data":"6d25fb036c59c2f014a67479e45ca13133f3db3e68f0efca7721568d4a98d053"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.735668 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mvjv" 
event={"ID":"04d0d455-b8c0-4bc5-9f79-5050021d55bc","Type":"ContainerStarted","Data":"d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d"} Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.812739 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcrsr\" (UniqueName: \"kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.812833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.813481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.832480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcrsr\" (UniqueName: \"kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr\") pod \"root-account-create-update-t585f\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") " pod="openstack/root-account-create-update-t585f" Feb 02 09:15:13 crc kubenswrapper[4720]: I0202 09:15:13.876257 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t585f" Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.484585 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t585f"] Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.784199 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t585f" event={"ID":"f5a687a1-7597-4207-b881-e2873c4b2f33","Type":"ContainerStarted","Data":"fbf3442a05db1e70751168b74583e6d81ee0ae849f23e44081e26651c55e6746"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.784289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t585f" event={"ID":"f5a687a1-7597-4207-b881-e2873c4b2f33","Type":"ContainerStarted","Data":"30552472279b8b746951ff25b49ed96546227c4447007bc623a61ce40bf35907"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.808652 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-t585f" podStartSLOduration=1.808630116 podStartE2EDuration="1.808630116s" podCreationTimestamp="2026-02-02 09:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:14.795868091 +0000 UTC m=+1148.651493647" watchObservedRunningTime="2026-02-02 09:15:14.808630116 +0000 UTC m=+1148.664255672" Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.813961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"4315c811bd1a8d743490eb9ea4b1a32c58a3659ae26476440faa4c24c423077c"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.814006 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"bb5f6ec75f37b41ac7c273ec63de652a2bab9792810c803e88bac58f7985d140"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.814021 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"dcaa4340538a040d4badff2d0aa2f6af3165e31e97fcde191a19043a0d917b28"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.814034 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"467882e216b03a12359e2c5be6ae0023e11568c1ca097390b8b220c9a9dc8235"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.814048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"90bae269-30fb-4c0c-8e00-717f68ef2b01","Type":"ContainerStarted","Data":"2b63693e558dd4415065cf456a809c695347d3f31eaf0e44c2bbe9f1fdc07e3e"} Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.868152 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.882363265 podStartE2EDuration="26.868127167s" podCreationTimestamp="2026-02-02 09:14:48 +0000 UTC" firstStartedPulling="2026-02-02 09:15:06.246625457 +0000 UTC m=+1140.102251013" lastFinishedPulling="2026-02-02 09:15:13.232389359 +0000 UTC m=+1147.088014915" observedRunningTime="2026-02-02 09:15:14.863427672 +0000 UTC m=+1148.719053238" watchObservedRunningTime="2026-02-02 09:15:14.868127167 
+0000 UTC m=+1148.723752723"
Feb 02 09:15:14 crc kubenswrapper[4720]: I0202 09:15:14.897286 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e37249-5eec-48c8-9366-9a291edd750e" path="/var/lib/kubelet/pods/e0e37249-5eec-48c8-9366-9a291edd750e/volumes"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.258945 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"]
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.268531 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.276207 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.283474 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"]
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.320555 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bde2-account-create-update-vjhgn"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.344854 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mvjv"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.453235 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts\") pod \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") "
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.454206 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r72fn\" (UniqueName: \"kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn\") pod \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\" (UID: \"04d0d455-b8c0-4bc5-9f79-5050021d55bc\") "
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.454137 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04d0d455-b8c0-4bc5-9f79-5050021d55bc" (UID: "04d0d455-b8c0-4bc5-9f79-5050021d55bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.455061 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn7qg\" (UniqueName: \"kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg\") pod \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") "
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.455431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts\") pod \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\" (UID: \"2813d031-5b81-42b0-82bd-9ef9dc55a7aa\") "
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.455952 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2813d031-5b81-42b0-82bd-9ef9dc55a7aa" (UID: "2813d031-5b81-42b0-82bd-9ef9dc55a7aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456219 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456298 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f979k\" (UniqueName: \"kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456327 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456391 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456555 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456709 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456822 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d0d455-b8c0-4bc5-9f79-5050021d55bc-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.456837 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.464067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg" (OuterVolumeSpecName: "kube-api-access-gn7qg") pod "2813d031-5b81-42b0-82bd-9ef9dc55a7aa" (UID: "2813d031-5b81-42b0-82bd-9ef9dc55a7aa"). InnerVolumeSpecName "kube-api-access-gn7qg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.464120 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn" (OuterVolumeSpecName: "kube-api-access-r72fn") pod "04d0d455-b8c0-4bc5-9f79-5050021d55bc" (UID: "04d0d455-b8c0-4bc5-9f79-5050021d55bc"). InnerVolumeSpecName "kube-api-access-r72fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558905 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f979k\" (UniqueName: \"kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558933 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.558979 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.559026 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r72fn\" (UniqueName: \"kubernetes.io/projected/04d0d455-b8c0-4bc5-9f79-5050021d55bc-kube-api-access-r72fn\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.559038 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn7qg\" (UniqueName: \"kubernetes.io/projected/2813d031-5b81-42b0-82bd-9ef9dc55a7aa-kube-api-access-gn7qg\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.559864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.559968 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.560287 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.560534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.560752 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.577223 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f979k\" (UniqueName: \"kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k\") pod \"dnsmasq-dns-77585f5f8c-rrrtl\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.643331 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.828271 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8mvjv" event={"ID":"04d0d455-b8c0-4bc5-9f79-5050021d55bc","Type":"ContainerDied","Data":"d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d"}
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.828320 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d486964a845e644c3be1ddb34ff686052646250be22dc96ba5e55a44683fe44d"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.828338 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8mvjv"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.830554 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bde2-account-create-update-vjhgn" event={"ID":"2813d031-5b81-42b0-82bd-9ef9dc55a7aa","Type":"ContainerDied","Data":"64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83"}
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.830588 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f44a0c478a172aad59bfc3dd2c09bbb5842f3e8f05bd9ceb9c8a0b1678ac83"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.830590 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bde2-account-create-update-vjhgn"
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.844725 4720 generic.go:334] "Generic (PLEG): container finished" podID="f5a687a1-7597-4207-b881-e2873c4b2f33" containerID="fbf3442a05db1e70751168b74583e6d81ee0ae849f23e44081e26651c55e6746" exitCode=0
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.844935 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t585f" event={"ID":"f5a687a1-7597-4207-b881-e2873c4b2f33","Type":"ContainerDied","Data":"fbf3442a05db1e70751168b74583e6d81ee0ae849f23e44081e26651c55e6746"}
Feb 02 09:15:15 crc kubenswrapper[4720]: I0202 09:15:15.945068 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"]
Feb 02 09:15:16 crc kubenswrapper[4720]: I0202 09:15:16.859963 4720 generic.go:334] "Generic (PLEG): container finished" podID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerID="7f8c40d76efd71f7ea248aa58cef978ceede578f2d9c1339892efc562f7303be" exitCode=0
Feb 02 09:15:16 crc kubenswrapper[4720]: I0202 09:15:16.860058 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" event={"ID":"809bb436-ed06-47de-aa07-670cf4f4ef8e","Type":"ContainerDied","Data":"7f8c40d76efd71f7ea248aa58cef978ceede578f2d9c1339892efc562f7303be"}
Feb 02 09:15:16 crc kubenswrapper[4720]: I0202 09:15:16.860393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" event={"ID":"809bb436-ed06-47de-aa07-670cf4f4ef8e","Type":"ContainerStarted","Data":"187fba6a246082bbeb0878aa5d0515881b5a902e042687b6d668c77ac3cdffb8"}
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.234995 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dcm6l"]
Feb 02 09:15:17 crc kubenswrapper[4720]: E0202 09:15:17.235523 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2813d031-5b81-42b0-82bd-9ef9dc55a7aa" containerName="mariadb-account-create-update"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.235540 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2813d031-5b81-42b0-82bd-9ef9dc55a7aa" containerName="mariadb-account-create-update"
Feb 02 09:15:17 crc kubenswrapper[4720]: E0202 09:15:17.235559 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d0d455-b8c0-4bc5-9f79-5050021d55bc" containerName="mariadb-database-create"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.235565 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d0d455-b8c0-4bc5-9f79-5050021d55bc" containerName="mariadb-database-create"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.235704 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d0d455-b8c0-4bc5-9f79-5050021d55bc" containerName="mariadb-database-create"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.235722 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2813d031-5b81-42b0-82bd-9ef9dc55a7aa" containerName="mariadb-account-create-update"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.236180 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.244412 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.245742 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dmxj"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.256794 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dcm6l"]
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.308091 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t585f"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.397950 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts\") pod \"f5a687a1-7597-4207-b881-e2873c4b2f33\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") "
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.397991 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcrsr\" (UniqueName: \"kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr\") pod \"f5a687a1-7597-4207-b881-e2873c4b2f33\" (UID: \"f5a687a1-7597-4207-b881-e2873c4b2f33\") "
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.398318 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpd9k\" (UniqueName: \"kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.398361 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.398422 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.398730 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.399402 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5a687a1-7597-4207-b881-e2873c4b2f33" (UID: "f5a687a1-7597-4207-b881-e2873c4b2f33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.401910 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr" (OuterVolumeSpecName: "kube-api-access-fcrsr") pod "f5a687a1-7597-4207-b881-e2873c4b2f33" (UID: "f5a687a1-7597-4207-b881-e2873c4b2f33"). InnerVolumeSpecName "kube-api-access-fcrsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500488 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500564 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpd9k\" (UniqueName: \"kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500592 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500707 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a687a1-7597-4207-b881-e2873c4b2f33-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.500721 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcrsr\" (UniqueName: \"kubernetes.io/projected/f5a687a1-7597-4207-b881-e2873c4b2f33-kube-api-access-fcrsr\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.504038 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.510259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.516967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.519514 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpd9k\" (UniqueName: \"kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k\") pod \"glance-db-sync-dcm6l\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.560313 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dcm6l"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.872999 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t585f"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.873085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t585f" event={"ID":"f5a687a1-7597-4207-b881-e2873c4b2f33","Type":"ContainerDied","Data":"30552472279b8b746951ff25b49ed96546227c4447007bc623a61ce40bf35907"}
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.873394 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30552472279b8b746951ff25b49ed96546227c4447007bc623a61ce40bf35907"
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.875273 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" event={"ID":"809bb436-ed06-47de-aa07-670cf4f4ef8e","Type":"ContainerStarted","Data":"447230fc15143f275a6a6aeff27501d7c7982f69bf7a4dbac26da477afe8e8db"}
Feb 02 09:15:17 crc kubenswrapper[4720]: I0202 09:15:17.875786 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:18 crc kubenswrapper[4720]: I0202 09:15:18.053125 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podStartSLOduration=3.053097129 podStartE2EDuration="3.053097129s" podCreationTimestamp="2026-02-02 09:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:17.907119083 +0000 UTC m=+1151.762744639" watchObservedRunningTime="2026-02-02 09:15:18.053097129 +0000 UTC m=+1151.908722725"
Feb 02 09:15:18 crc kubenswrapper[4720]: W0202 09:15:18.055130 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd226b95_5b7d_4a56_a605_e63267494899.slice/crio-dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176 WatchSource:0}: Error finding container dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176: Status 404 returned error can't find the container with id dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176
Feb 02 09:15:18 crc kubenswrapper[4720]: I0202 09:15:18.057602 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dcm6l"]
Feb 02 09:15:18 crc kubenswrapper[4720]: I0202 09:15:18.884685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dcm6l" event={"ID":"bd226b95-5b7d-4a56-a605-e63267494899","Type":"ContainerStarted","Data":"dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176"}
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.211105 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.528181 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.595626 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-dgs6k"]
Feb 02 09:15:23 crc kubenswrapper[4720]: E0202 09:15:23.596166 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a687a1-7597-4207-b881-e2873c4b2f33" containerName="mariadb-account-create-update"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.596189 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a687a1-7597-4207-b881-e2873c4b2f33" containerName="mariadb-account-create-update"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.596393 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a687a1-7597-4207-b881-e2873c4b2f33" containerName="mariadb-account-create-update"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.597728 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.617377 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-dgs6k"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.628009 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ffb2-account-create-update-8kngw"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.629608 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.633905 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.651120 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb2-account-create-update-8kngw"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.699941 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m26nr"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.701192 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.713706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtnkg\" (UniqueName: \"kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.713792 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.713814 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nggr\" (UniqueName: \"kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.713842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.727872 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m26nr"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.798609 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-1169-account-create-update-qsh8m"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816401 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56sp\" (UniqueName: \"kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816481 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nggr\" (UniqueName: \"kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816544 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816582 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.816650 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtnkg\" (UniqueName: \"kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.817459 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.818226 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.826781 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-1169-account-create-update-qsh8m"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.826985 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.832333 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.843069 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtnkg\" (UniqueName: \"kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg\") pod \"barbican-ffb2-account-create-update-8kngw\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.848827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nggr\" (UniqueName: \"kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr\") pod \"manila-db-create-dgs6k\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.859318 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6b58q"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.882210 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.893178 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6b58q"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.909507 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cn7l9"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.910818 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.915696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.915784 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.916211 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.916420 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jsbgm"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.917986 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.918117 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbvw\" (UniqueName: \"kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.918240 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56sp\" (UniqueName: \"kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.918330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.919869 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.922686 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cn7l9"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.923071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.951103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56sp\" (UniqueName: \"kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp\") pod \"barbican-db-create-m26nr\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.951953 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.980395 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fbxwd"]
Feb 02 09:15:23 crc kubenswrapper[4720]: I0202 09:15:23.981748 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.014778 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8d04-account-create-update-mh57v"]
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.016067 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.018178 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.021828 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m26nr"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.022480 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.022631 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbbs\" (UniqueName: \"kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.022801 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.022946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.023693 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.024330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbvw\" (UniqueName: \"kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.025810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79pp\" (UniqueName: \"kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.023656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.033910 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fbxwd"]
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.043252 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbvw\" (UniqueName: \"kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw\") pod \"manila-1169-account-create-update-qsh8m\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.049715 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d04-account-create-update-mh57v"]
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.127457 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.127950 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbbs\" (UniqueName: \"kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.128129 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfxw\" (UniqueName: \"kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.128324 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.129311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.129575 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2dl\" (UniqueName: \"kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.128671 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.129931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.130106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79pp\" (UniqueName: \"kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.130327 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.132716 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.137373 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.159997 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79pp\" (UniqueName: \"kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp\") pod \"cinder-db-create-6b58q\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.161498 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbbs\" (UniqueName: \"kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs\") pod \"keystone-db-sync-cn7l9\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") " pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.217288 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.231125 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.231570 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjfxw\" (UniqueName: \"kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.231635 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2dl\" (UniqueName: \"kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.231688 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.231710 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.232485 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.233202 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.241998 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cn7l9"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.248202 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjfxw\" (UniqueName: \"kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw\") pod \"cinder-8d04-account-create-update-mh57v\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.250982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2dl\" (UniqueName: \"kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl\") pod \"neutron-db-create-fbxwd\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.286581 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc5a-account-create-update-vsv48"]
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.287535 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.289278 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.300068 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fbxwd"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.308283 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc5a-account-create-update-vsv48"]
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.330705 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.434776 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrdl\" (UniqueName: \"kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.434832 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.536283 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrdl\" (UniqueName: \"kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.536373 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.537141 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.553581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrdl\" (UniqueName: \"kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl\") pod \"neutron-cc5a-account-create-update-vsv48\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:24 crc kubenswrapper[4720]: I0202 09:15:24.615266 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc5a-account-create-update-vsv48"
Feb 02 09:15:25 crc kubenswrapper[4720]: I0202 09:15:25.645202 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl"
Feb 02 09:15:25 crc kubenswrapper[4720]: I0202 09:15:25.701256 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"]
Feb 02 09:15:25 crc kubenswrapper[4720]: I0202 09:15:25.701488 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-fcsps" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="dnsmasq-dns" containerID="cri-o://c54986bc443adbc25acefdad6156df95b0b0e3a23dd2b068bb83761d5827089e" gracePeriod=10
Feb 02 09:15:25 crc kubenswrapper[4720]: I0202 09:15:25.961564 4720 generic.go:334] "Generic (PLEG): container finished" podID="8547591d-9191-4d26-83e2-17cbc78ec126" containerID="c54986bc443adbc25acefdad6156df95b0b0e3a23dd2b068bb83761d5827089e" exitCode=0
Feb 02 09:15:25 crc kubenswrapper[4720]: I0202 09:15:25.961612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fcsps" event={"ID":"8547591d-9191-4d26-83e2-17cbc78ec126","Type":"ContainerDied","Data":"c54986bc443adbc25acefdad6156df95b0b0e3a23dd2b068bb83761d5827089e"}
Feb 02 09:15:28 crc kubenswrapper[4720]: I0202 09:15:28.831575 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-fcsps" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused"
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.011111 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fcsps" event={"ID":"8547591d-9191-4d26-83e2-17cbc78ec126","Type":"ContainerDied","Data":"7670ae41c486d430d8b6b38ee386e38d5c14f27ca2f6b9a6e8a59110a4b31ac2"}
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.011621 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7670ae41c486d430d8b6b38ee386e38d5c14f27ca2f6b9a6e8a59110a4b31ac2"
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.027052 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fcsps"
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.182617 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gnp\" (UniqueName: \"kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp\") pod \"8547591d-9191-4d26-83e2-17cbc78ec126\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") "
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.182957 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb\") pod \"8547591d-9191-4d26-83e2-17cbc78ec126\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") "
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.182978 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb\") pod \"8547591d-9191-4d26-83e2-17cbc78ec126\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") "
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.183016 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config\") pod \"8547591d-9191-4d26-83e2-17cbc78ec126\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") "
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.183071 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc\") pod \"8547591d-9191-4d26-83e2-17cbc78ec126\" (UID: \"8547591d-9191-4d26-83e2-17cbc78ec126\") "
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.192324 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp" (OuterVolumeSpecName: "kube-api-access-69gnp") pod "8547591d-9191-4d26-83e2-17cbc78ec126" (UID: "8547591d-9191-4d26-83e2-17cbc78ec126"). InnerVolumeSpecName "kube-api-access-69gnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.234958 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config" (OuterVolumeSpecName: "config") pod "8547591d-9191-4d26-83e2-17cbc78ec126" (UID: "8547591d-9191-4d26-83e2-17cbc78ec126"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.240376 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-1169-account-create-update-qsh8m"]
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.244702 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode18e396c_f47c_4be7_8ca8_c5ff31393401.slice/crio-e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6 WatchSource:0}: Error finding container e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6: Status 404 returned error can't find the container with id e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.244709 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8547591d-9191-4d26-83e2-17cbc78ec126" (UID: "8547591d-9191-4d26-83e2-17cbc78ec126"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.260913 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8547591d-9191-4d26-83e2-17cbc78ec126" (UID: "8547591d-9191-4d26-83e2-17cbc78ec126"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.278722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8547591d-9191-4d26-83e2-17cbc78ec126" (UID: "8547591d-9191-4d26-83e2-17cbc78ec126"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.285636 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.285668 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.285698 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gnp\" (UniqueName: \"kubernetes.io/projected/8547591d-9191-4d26-83e2-17cbc78ec126-kube-api-access-69gnp\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.285712 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.285720 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8547591d-9191-4d26-83e2-17cbc78ec126-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.417987 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc5a-account-create-update-vsv48"]
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.427067 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod383c4f2b_6f59_45a2_a121_f1e94f555a96.slice/crio-4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0 WatchSource:0}: Error finding container 4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0: Status 404 returned error can't find the container with id 4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.429573 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d04-account-create-update-mh57v"]
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.438811 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cn7l9"]
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.622817 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m26nr"]
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.625112 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8658d1c_5f58_4e0a_af31_7e87b7843e8e.slice/crio-c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c WatchSource:0}: Error finding container c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c: Status 404 returned error can't find the container with id c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.627533 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12555194_f017_4145_a0cf_8f9369bdaa76.slice/crio-8f9ff0ef233f5c9b92fa509c25922332d63acf1e408258494bf7fd704658adc5 WatchSource:0}: Error finding container 8f9ff0ef233f5c9b92fa509c25922332d63acf1e408258494bf7fd704658adc5: Status
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.630154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6b58q"]
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.638493 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-dgs6k"]
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.744419 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb2-account-create-update-8kngw"]
Feb 02 09:15:31 crc kubenswrapper[4720]: I0202 09:15:31.758184 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fbxwd"]
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.771708 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3444a48e_b0df_47ec_b6d8_a43708d1f84a.slice/crio-36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c WatchSource:0}: Error finding container 36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c: Status 404 returned error can't find the container with id 36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c
Feb 02 09:15:31 crc kubenswrapper[4720]: W0202 09:15:31.784151 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3417648f_9a90_4897_87ab_0131b5906201.slice/crio-87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4 WatchSource:0}: Error finding container 87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4: Status 404 returned error can't find the container with id 87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.021952 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fbxwd" event={"ID":"3417648f-9a90-4897-87ab-0131b5906201","Type":"ContainerStarted","Data":"87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.025199 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cn7l9" event={"ID":"0bce2adf-98dc-4eb6-90e3-c2956976b371","Type":"ContainerStarted","Data":"02ab0461e56b6386fa81066702e3336857b11a3d8f42a84215b8b6e1292bab43"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.028264 4720 generic.go:334] "Generic (PLEG): container finished" podID="f2450cc2-ff6b-4827-a81c-3dc7a69854b0" containerID="1982e55fe5154815513d133addf473bef686630793bf9a2f0f6734cf04e8d56c" exitCode=0
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.028385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d04-account-create-update-mh57v" event={"ID":"f2450cc2-ff6b-4827-a81c-3dc7a69854b0","Type":"ContainerDied","Data":"1982e55fe5154815513d133addf473bef686630793bf9a2f0f6734cf04e8d56c"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.028423 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d04-account-create-update-mh57v" event={"ID":"f2450cc2-ff6b-4827-a81c-3dc7a69854b0","Type":"ContainerStarted","Data":"2d08106fc0cd56f633bf74fb4df15fad6af3f3f62d3b3aba12735a7367012543"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.031580 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b58q" event={"ID":"3fff2609-d43b-4174-bdc2-cdab850baf7e","Type":"ContainerStarted","Data":"d1b1351a4dacd4541d5a52b2a4714233d461ced116d34cc09800cd368dae1c1e"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.034510 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-dgs6k" event={"ID":"12555194-f017-4145-a0cf-8f9369bdaa76","Type":"ContainerStarted","Data":"8f9ff0ef233f5c9b92fa509c25922332d63acf1e408258494bf7fd704658adc5"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.037011 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dcm6l" event={"ID":"bd226b95-5b7d-4a56-a605-e63267494899","Type":"ContainerStarted","Data":"2170f92bffcf28092d3ec6dd9e584f0254423cd7c1eb77c02d73b9575a0eefc9"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.040601 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m26nr" event={"ID":"c8658d1c-5f58-4e0a-af31-7e87b7843e8e","Type":"ContainerStarted","Data":"c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.046077 4720 generic.go:334] "Generic (PLEG): container finished" podID="383c4f2b-6f59-45a2-a121-f1e94f555a96" containerID="df0178473bd7bec51568932c723d57e1002628583d84c81def4d9b858140a0fe" exitCode=0
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.046255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc5a-account-create-update-vsv48" event={"ID":"383c4f2b-6f59-45a2-a121-f1e94f555a96","Type":"ContainerDied","Data":"df0178473bd7bec51568932c723d57e1002628583d84c81def4d9b858140a0fe"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.046293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc5a-account-create-update-vsv48" event={"ID":"383c4f2b-6f59-45a2-a121-f1e94f555a96","Type":"ContainerStarted","Data":"4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.048149 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb2-account-create-update-8kngw" event={"ID":"3444a48e-b0df-47ec-b6d8-a43708d1f84a","Type":"ContainerStarted","Data":"36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.053966 4720 generic.go:334] "Generic (PLEG): container finished" podID="e18e396c-f47c-4be7-8ca8-c5ff31393401" containerID="961f1649a2e34151d13896f96740f2da013274c18538a28f50d00e89e9ca604c" exitCode=0
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.054065 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fcsps"
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.056641 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1169-account-create-update-qsh8m" event={"ID":"e18e396c-f47c-4be7-8ca8-c5ff31393401","Type":"ContainerDied","Data":"961f1649a2e34151d13896f96740f2da013274c18538a28f50d00e89e9ca604c"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.056722 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1169-account-create-update-qsh8m" event={"ID":"e18e396c-f47c-4be7-8ca8-c5ff31393401","Type":"ContainerStarted","Data":"e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6"}
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.077547 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dcm6l" podStartSLOduration=2.176727 podStartE2EDuration="15.077523147s" podCreationTimestamp="2026-02-02 09:15:17 +0000 UTC" firstStartedPulling="2026-02-02 09:15:18.057560949 +0000 UTC m=+1151.913186495" lastFinishedPulling="2026-02-02 09:15:30.958357086 +0000 UTC m=+1164.813982642" observedRunningTime="2026-02-02 09:15:32.068017484 +0000 UTC m=+1165.923643060" watchObservedRunningTime="2026-02-02 09:15:32.077523147 +0000 UTC m=+1165.933148703"
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.160246 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"]
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.168871 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fcsps"]
Feb 02 09:15:32 crc kubenswrapper[4720]: I0202 09:15:32.901184 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" path="/var/lib/kubelet/pods/8547591d-9191-4d26-83e2-17cbc78ec126/volumes"
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.064180 4720 generic.go:334] "Generic (PLEG): container finished" podID="c8658d1c-5f58-4e0a-af31-7e87b7843e8e" containerID="8665e8c1444eb7ed2e71f0bd7d0f8387ab49bc92254adbcfaed206b7e62a2637" exitCode=0
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.064262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m26nr" event={"ID":"c8658d1c-5f58-4e0a-af31-7e87b7843e8e","Type":"ContainerDied","Data":"8665e8c1444eb7ed2e71f0bd7d0f8387ab49bc92254adbcfaed206b7e62a2637"}
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.066476 4720 generic.go:334] "Generic (PLEG): container finished" podID="3fff2609-d43b-4174-bdc2-cdab850baf7e" containerID="a7af5c92d927bacfaa0c3f588c54784a78dd452735b0432ad4a43d042501daf7" exitCode=0
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.067394 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b58q" event={"ID":"3fff2609-d43b-4174-bdc2-cdab850baf7e","Type":"ContainerDied","Data":"a7af5c92d927bacfaa0c3f588c54784a78dd452735b0432ad4a43d042501daf7"}
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.072691 4720 generic.go:334] "Generic (PLEG): container finished" podID="3444a48e-b0df-47ec-b6d8-a43708d1f84a" containerID="7308ba5de3fa9560e5bff6fbf79475063851b5d283205da6e4ffa0ca38c0f4c6" exitCode=0
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.072748 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb2-account-create-update-8kngw" event={"ID":"3444a48e-b0df-47ec-b6d8-a43708d1f84a","Type":"ContainerDied","Data":"7308ba5de3fa9560e5bff6fbf79475063851b5d283205da6e4ffa0ca38c0f4c6"}
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.074968 4720 generic.go:334] "Generic (PLEG): container finished" podID="12555194-f017-4145-a0cf-8f9369bdaa76" containerID="0d0105bb311a8924dec62239afedbb1f56df6f4e899adeb0c02530d7be02a382" exitCode=0
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.075054 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-dgs6k" event={"ID":"12555194-f017-4145-a0cf-8f9369bdaa76","Type":"ContainerDied","Data":"0d0105bb311a8924dec62239afedbb1f56df6f4e899adeb0c02530d7be02a382"}
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.077743 4720 generic.go:334] "Generic (PLEG): container finished" podID="3417648f-9a90-4897-87ab-0131b5906201" containerID="7d80b9b8c2cddd6e834c7769c404fac652bb0b1340175f23953c46b703fe771a" exitCode=0
Feb 02 09:15:33 crc kubenswrapper[4720]: I0202 09:15:33.077903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fbxwd" event={"ID":"3417648f-9a90-4897-87ab-0131b5906201","Type":"ContainerDied","Data":"7d80b9b8c2cddd6e834c7769c404fac652bb0b1340175f23953c46b703fe771a"}
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.039894 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.069486 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b58q"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.077129 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-dgs6k"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.085754 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1169-account-create-update-qsh8m"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.095041 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8d04-account-create-update-mh57v"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.115222 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb2-account-create-update-8kngw" event={"ID":"3444a48e-b0df-47ec-b6d8-a43708d1f84a","Type":"ContainerDied","Data":"36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c"}
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.115258 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb2-account-create-update-8kngw"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.115270 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36791a280e97c08a962bb6f6f2b468d9811562f2e65fa3be12b92d248b7c966c"
Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.123498 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc5a-account-create-update-vsv48"
Need to start a new one" pod="openstack/neutron-cc5a-account-create-update-vsv48" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.123498 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-1169-account-create-update-qsh8m" event={"ID":"e18e396c-f47c-4be7-8ca8-c5ff31393401","Type":"ContainerDied","Data":"e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.123534 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19f88d2fdba939d476faeb31319e2e45b70c2fe1119179fdf8d0dcebd70e1a6" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.123634 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-1169-account-create-update-qsh8m" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.126637 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-dgs6k" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.126645 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-dgs6k" event={"ID":"12555194-f017-4145-a0cf-8f9369bdaa76","Type":"ContainerDied","Data":"8f9ff0ef233f5c9b92fa509c25922332d63acf1e408258494bf7fd704658adc5"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.126996 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f9ff0ef233f5c9b92fa509c25922332d63acf1e408258494bf7fd704658adc5" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.127462 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fbxwd" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.134011 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fbxwd" event={"ID":"3417648f-9a90-4897-87ab-0131b5906201","Type":"ContainerDied","Data":"87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.134048 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87affdeb4bf48039c912f4ef94f1508fade5d4f8a1804ba39e89eafad4ea7bc4" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.135506 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m26nr" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.136375 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d04-account-create-update-mh57v" event={"ID":"f2450cc2-ff6b-4827-a81c-3dc7a69854b0","Type":"ContainerDied","Data":"2d08106fc0cd56f633bf74fb4df15fad6af3f3f62d3b3aba12735a7367012543"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.136404 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d08106fc0cd56f633bf74fb4df15fad6af3f3f62d3b3aba12735a7367012543" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.136448 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d04-account-create-update-mh57v" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.140598 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m26nr" event={"ID":"c8658d1c-5f58-4e0a-af31-7e87b7843e8e","Type":"ContainerDied","Data":"c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.140631 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33409cc5a3ef9b92fe9f04e2b844e175480bb16c0c4b9c3d593b896b7a67e4c" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.140680 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m26nr" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.142988 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc5a-account-create-update-vsv48" event={"ID":"383c4f2b-6f59-45a2-a121-f1e94f555a96","Type":"ContainerDied","Data":"4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.143008 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc5a-account-create-update-vsv48" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.143023 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c22df86dbf3ed1f8da2497d9e842b96d59f1c58af188fcfc3b161a4227d58e0" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.145494 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b58q" event={"ID":"3fff2609-d43b-4174-bdc2-cdab850baf7e","Type":"ContainerDied","Data":"d1b1351a4dacd4541d5a52b2a4714233d461ced116d34cc09800cd368dae1c1e"} Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.145523 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b1351a4dacd4541d5a52b2a4714233d461ced116d34cc09800cd368dae1c1e" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.145558 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6b58q" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219391 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts\") pod \"e18e396c-f47c-4be7-8ca8-c5ff31393401\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219482 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjfxw\" (UniqueName: \"kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw\") pod \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219543 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfbvw\" (UniqueName: \"kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw\") pod \"e18e396c-f47c-4be7-8ca8-c5ff31393401\" (UID: \"e18e396c-f47c-4be7-8ca8-c5ff31393401\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts\") pod \"3417648f-9a90-4897-87ab-0131b5906201\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219660 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrdl\" (UniqueName: \"kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl\") pod \"383c4f2b-6f59-45a2-a121-f1e94f555a96\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtnkg\" (UniqueName: \"kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg\") pod \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219745 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h2dl\" (UniqueName: \"kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl\") pod \"3417648f-9a90-4897-87ab-0131b5906201\" (UID: \"3417648f-9a90-4897-87ab-0131b5906201\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219823 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts\") pod \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\" (UID: \"f2450cc2-ff6b-4827-a81c-3dc7a69854b0\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.219943 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p79pp\" (UniqueName: \"kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp\") pod \"3fff2609-d43b-4174-bdc2-cdab850baf7e\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220003 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts\") pod \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\" (UID: \"3444a48e-b0df-47ec-b6d8-a43708d1f84a\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220041 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nggr\" (UniqueName: \"kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr\") pod \"12555194-f017-4145-a0cf-8f9369bdaa76\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220498 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts\") pod \"383c4f2b-6f59-45a2-a121-f1e94f555a96\" (UID: \"383c4f2b-6f59-45a2-a121-f1e94f555a96\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts\") pod \"3fff2609-d43b-4174-bdc2-cdab850baf7e\" (UID: \"3fff2609-d43b-4174-bdc2-cdab850baf7e\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220593 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts\") pod \"12555194-f017-4145-a0cf-8f9369bdaa76\" (UID: \"12555194-f017-4145-a0cf-8f9369bdaa76\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e18e396c-f47c-4be7-8ca8-c5ff31393401" (UID: "e18e396c-f47c-4be7-8ca8-c5ff31393401"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3444a48e-b0df-47ec-b6d8-a43708d1f84a" (UID: "3444a48e-b0df-47ec-b6d8-a43708d1f84a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220736 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2450cc2-ff6b-4827-a81c-3dc7a69854b0" (UID: "f2450cc2-ff6b-4827-a81c-3dc7a69854b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.220998 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "383c4f2b-6f59-45a2-a121-f1e94f555a96" (UID: "383c4f2b-6f59-45a2-a121-f1e94f555a96"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221004 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3417648f-9a90-4897-87ab-0131b5906201" (UID: "3417648f-9a90-4897-87ab-0131b5906201"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221181 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fff2609-d43b-4174-bdc2-cdab850baf7e" (UID: "3fff2609-d43b-4174-bdc2-cdab850baf7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221446 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12555194-f017-4145-a0cf-8f9369bdaa76" (UID: "12555194-f017-4145-a0cf-8f9369bdaa76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221475 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3444a48e-b0df-47ec-b6d8-a43708d1f84a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221498 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383c4f2b-6f59-45a2-a121-f1e94f555a96-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221511 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fff2609-d43b-4174-bdc2-cdab850baf7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221523 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e18e396c-f47c-4be7-8ca8-c5ff31393401-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221536 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3417648f-9a90-4897-87ab-0131b5906201-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.221547 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.224923 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw" (OuterVolumeSpecName: "kube-api-access-hfbvw") pod "e18e396c-f47c-4be7-8ca8-c5ff31393401" (UID: "e18e396c-f47c-4be7-8ca8-c5ff31393401"). InnerVolumeSpecName "kube-api-access-hfbvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.225256 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg" (OuterVolumeSpecName: "kube-api-access-jtnkg") pod "3444a48e-b0df-47ec-b6d8-a43708d1f84a" (UID: "3444a48e-b0df-47ec-b6d8-a43708d1f84a"). InnerVolumeSpecName "kube-api-access-jtnkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.226326 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw" (OuterVolumeSpecName: "kube-api-access-gjfxw") pod "f2450cc2-ff6b-4827-a81c-3dc7a69854b0" (UID: "f2450cc2-ff6b-4827-a81c-3dc7a69854b0"). InnerVolumeSpecName "kube-api-access-gjfxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.226467 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl" (OuterVolumeSpecName: "kube-api-access-ptrdl") pod "383c4f2b-6f59-45a2-a121-f1e94f555a96" (UID: "383c4f2b-6f59-45a2-a121-f1e94f555a96"). InnerVolumeSpecName "kube-api-access-ptrdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.231246 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp" (OuterVolumeSpecName: "kube-api-access-p79pp") pod "3fff2609-d43b-4174-bdc2-cdab850baf7e" (UID: "3fff2609-d43b-4174-bdc2-cdab850baf7e"). InnerVolumeSpecName "kube-api-access-p79pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.234479 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr" (OuterVolumeSpecName: "kube-api-access-9nggr") pod "12555194-f017-4145-a0cf-8f9369bdaa76" (UID: "12555194-f017-4145-a0cf-8f9369bdaa76"). InnerVolumeSpecName "kube-api-access-9nggr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.235100 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl" (OuterVolumeSpecName: "kube-api-access-5h2dl") pod "3417648f-9a90-4897-87ab-0131b5906201" (UID: "3417648f-9a90-4897-87ab-0131b5906201"). InnerVolumeSpecName "kube-api-access-5h2dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.323024 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56sp\" (UniqueName: \"kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp\") pod \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.323706 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts\") pod \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\" (UID: \"c8658d1c-5f58-4e0a-af31-7e87b7843e8e\") " Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8658d1c-5f58-4e0a-af31-7e87b7843e8e" (UID: "c8658d1c-5f58-4e0a-af31-7e87b7843e8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324600 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12555194-f017-4145-a0cf-8f9369bdaa76-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324682 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjfxw\" (UniqueName: \"kubernetes.io/projected/f2450cc2-ff6b-4827-a81c-3dc7a69854b0-kube-api-access-gjfxw\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324753 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfbvw\" (UniqueName: \"kubernetes.io/projected/e18e396c-f47c-4be7-8ca8-c5ff31393401-kube-api-access-hfbvw\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324822 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrdl\" (UniqueName: \"kubernetes.io/projected/383c4f2b-6f59-45a2-a121-f1e94f555a96-kube-api-access-ptrdl\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.324945 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtnkg\" (UniqueName: \"kubernetes.io/projected/3444a48e-b0df-47ec-b6d8-a43708d1f84a-kube-api-access-jtnkg\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.325022 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h2dl\" (UniqueName: \"kubernetes.io/projected/3417648f-9a90-4897-87ab-0131b5906201-kube-api-access-5h2dl\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.325090 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.325156 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p79pp\" (UniqueName: \"kubernetes.io/projected/3fff2609-d43b-4174-bdc2-cdab850baf7e-kube-api-access-p79pp\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.325218 4720 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nggr\" (UniqueName: \"kubernetes.io/projected/12555194-f017-4145-a0cf-8f9369bdaa76-kube-api-access-9nggr\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.326707 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp" (OuterVolumeSpecName: "kube-api-access-b56sp") pod "c8658d1c-5f58-4e0a-af31-7e87b7843e8e" (UID: "c8658d1c-5f58-4e0a-af31-7e87b7843e8e"). InnerVolumeSpecName "kube-api-access-b56sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:36 crc kubenswrapper[4720]: I0202 09:15:36.426534 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56sp\" (UniqueName: \"kubernetes.io/projected/c8658d1c-5f58-4e0a-af31-7e87b7843e8e-kube-api-access-b56sp\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:37 crc kubenswrapper[4720]: I0202 09:15:37.155520 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fbxwd" Feb 02 09:15:37 crc kubenswrapper[4720]: I0202 09:15:37.156551 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cn7l9" event={"ID":"0bce2adf-98dc-4eb6-90e3-c2956976b371","Type":"ContainerStarted","Data":"f9740ccb223a5ad43718db224ed2c1a04ff244269b3ee07e62f33ed7da41deb8"} Feb 02 09:15:37 crc kubenswrapper[4720]: I0202 09:15:37.185016 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cn7l9" podStartSLOduration=9.697493168 podStartE2EDuration="14.184994563s" podCreationTimestamp="2026-02-02 09:15:23 +0000 UTC" firstStartedPulling="2026-02-02 09:15:31.431486362 +0000 UTC m=+1165.287111918" lastFinishedPulling="2026-02-02 09:15:35.918987727 +0000 UTC m=+1169.774613313" observedRunningTime="2026-02-02 09:15:37.176731248 +0000 UTC m=+1171.032356804" watchObservedRunningTime="2026-02-02 09:15:37.184994563 +0000 UTC m=+1171.040620119" Feb 02 09:15:38 crc kubenswrapper[4720]: I0202 09:15:38.170490 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd226b95-5b7d-4a56-a605-e63267494899" containerID="2170f92bffcf28092d3ec6dd9e584f0254423cd7c1eb77c02d73b9575a0eefc9" exitCode=0 Feb 02 09:15:38 crc kubenswrapper[4720]: I0202 09:15:38.170584 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dcm6l" event={"ID":"bd226b95-5b7d-4a56-a605-e63267494899","Type":"ContainerDied","Data":"2170f92bffcf28092d3ec6dd9e584f0254423cd7c1eb77c02d73b9575a0eefc9"} Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.181931 4720 generic.go:334] "Generic (PLEG): container finished" podID="0bce2adf-98dc-4eb6-90e3-c2956976b371" containerID="f9740ccb223a5ad43718db224ed2c1a04ff244269b3ee07e62f33ed7da41deb8" exitCode=0 Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.182073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cn7l9" event={"ID":"0bce2adf-98dc-4eb6-90e3-c2956976b371","Type":"ContainerDied","Data":"f9740ccb223a5ad43718db224ed2c1a04ff244269b3ee07e62f33ed7da41deb8"} Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.688053 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dcm6l" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.777418 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle\") pod \"bd226b95-5b7d-4a56-a605-e63267494899\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.777556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpd9k\" (UniqueName: \"kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k\") pod \"bd226b95-5b7d-4a56-a605-e63267494899\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.777589 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data\") pod \"bd226b95-5b7d-4a56-a605-e63267494899\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.777686 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data\") pod \"bd226b95-5b7d-4a56-a605-e63267494899\" (UID: \"bd226b95-5b7d-4a56-a605-e63267494899\") " Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.793158 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bd226b95-5b7d-4a56-a605-e63267494899" (UID: "bd226b95-5b7d-4a56-a605-e63267494899"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.793173 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k" (OuterVolumeSpecName: "kube-api-access-zpd9k") pod "bd226b95-5b7d-4a56-a605-e63267494899" (UID: "bd226b95-5b7d-4a56-a605-e63267494899"). InnerVolumeSpecName "kube-api-access-zpd9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.803708 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd226b95-5b7d-4a56-a605-e63267494899" (UID: "bd226b95-5b7d-4a56-a605-e63267494899"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.827649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data" (OuterVolumeSpecName: "config-data") pod "bd226b95-5b7d-4a56-a605-e63267494899" (UID: "bd226b95-5b7d-4a56-a605-e63267494899"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.879388 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpd9k\" (UniqueName: \"kubernetes.io/projected/bd226b95-5b7d-4a56-a605-e63267494899-kube-api-access-zpd9k\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.879435 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.879447 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:39 crc kubenswrapper[4720]: I0202 09:15:39.879461 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd226b95-5b7d-4a56-a605-e63267494899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.196664 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dcm6l" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.199430 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dcm6l" event={"ID":"bd226b95-5b7d-4a56-a605-e63267494899","Type":"ContainerDied","Data":"dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176"} Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.199517 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc97bd0dd21a188b6b24890e8d3510f489ec7498ad73d16748bf8a88f6839176" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.540210 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cn7l9" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.610427 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"] Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.611755 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fff2609-d43b-4174-bdc2-cdab850baf7e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.611769 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fff2609-d43b-4174-bdc2-cdab850baf7e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.611781 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3444a48e-b0df-47ec-b6d8-a43708d1f84a" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.611791 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3444a48e-b0df-47ec-b6d8-a43708d1f84a" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.611807 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18e396c-f47c-4be7-8ca8-c5ff31393401" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.611813 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18e396c-f47c-4be7-8ca8-c5ff31393401" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.611842 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3417648f-9a90-4897-87ab-0131b5906201" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612034 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3417648f-9a90-4897-87ab-0131b5906201" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612048 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd226b95-5b7d-4a56-a605-e63267494899" containerName="glance-db-sync" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612055 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd226b95-5b7d-4a56-a605-e63267494899" containerName="glance-db-sync" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612072 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12555194-f017-4145-a0cf-8f9369bdaa76" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612080 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="12555194-f017-4145-a0cf-8f9369bdaa76" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612093 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8658d1c-5f58-4e0a-af31-7e87b7843e8e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612099 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8658d1c-5f58-4e0a-af31-7e87b7843e8e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612112 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bce2adf-98dc-4eb6-90e3-c2956976b371" containerName="keystone-db-sync" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612120 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bce2adf-98dc-4eb6-90e3-c2956976b371" containerName="keystone-db-sync" Feb 02 09:15:40 crc 
kubenswrapper[4720]: E0202 09:15:40.612128 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="dnsmasq-dns" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612134 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="dnsmasq-dns" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612146 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="init" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.612152 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="init" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.612957 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383c4f2b-6f59-45a2-a121-f1e94f555a96" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.620262 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="383c4f2b-6f59-45a2-a121-f1e94f555a96" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: E0202 09:15:40.620322 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2450cc2-ff6b-4827-a81c-3dc7a69854b0" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.620331 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2450cc2-ff6b-4827-a81c-3dc7a69854b0" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.620989 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2450cc2-ff6b-4827-a81c-3dc7a69854b0" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621010 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd226b95-5b7d-4a56-a605-e63267494899" containerName="glance-db-sync" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621028 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="383c4f2b-6f59-45a2-a121-f1e94f555a96" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621043 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bce2adf-98dc-4eb6-90e3-c2956976b371" containerName="keystone-db-sync" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621063 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3417648f-9a90-4897-87ab-0131b5906201" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621073 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3444a48e-b0df-47ec-b6d8-a43708d1f84a" containerName="mariadb-account-create-update" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621081 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8658d1c-5f58-4e0a-af31-7e87b7843e8e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621099 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fff2609-d43b-4174-bdc2-cdab850baf7e" containerName="mariadb-database-create" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621112 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8547591d-9191-4d26-83e2-17cbc78ec126" containerName="dnsmasq-dns" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621120 4720 
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.621138 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="12555194-f017-4145-a0cf-8f9369bdaa76" containerName="mariadb-database-create"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.624377 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.632172 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"]
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.701355 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbbs\" (UniqueName: \"kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs\") pod \"0bce2adf-98dc-4eb6-90e3-c2956976b371\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") "
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.701410 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle\") pod \"0bce2adf-98dc-4eb6-90e3-c2956976b371\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") "
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.701458 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data\") pod \"0bce2adf-98dc-4eb6-90e3-c2956976b371\" (UID: \"0bce2adf-98dc-4eb6-90e3-c2956976b371\") "
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.701806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.701878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.702075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.702164 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.702220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.702277 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.723692 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs" (OuterVolumeSpecName: "kube-api-access-fdbbs") pod "0bce2adf-98dc-4eb6-90e3-c2956976b371" (UID: "0bce2adf-98dc-4eb6-90e3-c2956976b371"). InnerVolumeSpecName "kube-api-access-fdbbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.802401 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data" (OuterVolumeSpecName: "config-data") pod "0bce2adf-98dc-4eb6-90e3-c2956976b371" (UID: "0bce2adf-98dc-4eb6-90e3-c2956976b371"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.808794 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.808857 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.808907 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.808963 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.809036 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.809079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.809144 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.809165 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbbs\" (UniqueName: \"kubernetes.io/projected/0bce2adf-98dc-4eb6-90e3-c2956976b371-kube-api-access-fdbbs\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.810054 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.810548 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.810944 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.811261 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.811546 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.831495 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bce2adf-98dc-4eb6-90e3-c2956976b371" (UID: "0bce2adf-98dc-4eb6-90e3-c2956976b371"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.840686 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t\") pod \"dnsmasq-dns-7ff5475cc9-kl2zq\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.911000 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bce2adf-98dc-4eb6-90e3-c2956976b371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:40 crc kubenswrapper[4720]: I0202 09:15:40.957333 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.207037 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cn7l9" event={"ID":"0bce2adf-98dc-4eb6-90e3-c2956976b371","Type":"ContainerDied","Data":"02ab0461e56b6386fa81066702e3336857b11a3d8f42a84215b8b6e1292bab43"} Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.207076 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ab0461e56b6386fa81066702e3336857b11a3d8f42a84215b8b6e1292bab43" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.207137 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cn7l9" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.445789 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.457384 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.475432 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d22dh"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.477590 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.482080 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.482598 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.482779 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.483032 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.483787 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jsbgm" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.488459 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d22dh"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.534689 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.535953 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.582115 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636783 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636825 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9pmg\" (UniqueName: \"kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636850 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636869 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.636960 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.637067 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcnbr\" (UniqueName: \"kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.637095 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.637121 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.637140 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.666981 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-795577799f-bc22j"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.668318 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.679301 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.679356 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.679541 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-dfsst" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.681963 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.712088 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795577799f-bc22j"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739006 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcnbr\" (UniqueName: \"kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739090 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739112 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739198 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739245 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9pmg\" (UniqueName: \"kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " 
pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739267 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739291 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739323 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.739977 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.741214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.741291 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.742124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.742680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.749454 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.752245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.757836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.760929 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.761098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.761539 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.779371 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.781191 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.792318 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.792554 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.809946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcnbr\" (UniqueName: \"kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr\") pod \"dnsmasq-dns-5c5cc7c5ff-57pls\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.810017 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6zn5f"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.811140 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.819748 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.819921 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bbcg9" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.820088 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.840336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9pmg\" (UniqueName: \"kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg\") pod \"keystone-bootstrap-d22dh\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.840859 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl9tk\" (UniqueName: \"kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.840921 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.840965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.840980 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.841027 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.852755 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.875633 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6zn5f"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.881789 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mcqm2"] Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.882929 4720 util.go:30] "No sandbox for pod can be found. 
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.885417 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.885607 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ptvgr"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.885725 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.906572 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vn2mf"]
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.907755 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vn2mf"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.912482 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.912729 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77xk"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.913905 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.921593 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-jspmg"]
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.923071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jspmg"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.926594 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-27w2w"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.927322 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.946316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959082 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f877s\" (UniqueName: \"kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959203 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5rc\" (UniqueName: \"kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959371 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959422 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl9tk\" (UniqueName: \"kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959697 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.959716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.960402 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.960919 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.966947 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:41 crc kubenswrapper[4720]: I0202 09:15:41.991772 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl9tk\" (UniqueName: \"kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.030647 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vn2mf"]
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.044817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key\") pod \"horizon-795577799f-bc22j\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " pod="openstack/horizon-795577799f-bc22j"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.053699 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mcqm2"]
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061230 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061277 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061301 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9rg\" (UniqueName: \"kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061409 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061443 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqnn\" (UniqueName: \"kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061475 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061491 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f877s\" (UniqueName: \"kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061572 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061628 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5rc\" (UniqueName: \"kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061650 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061690 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf"
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdcv\" (UniqueName: \"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2"
\"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061746 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.061766 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.062638 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.063571 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.072932 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.074075 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.076969 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.077601 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.078085 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.083875 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 
crc kubenswrapper[4720]: I0202 09:15:42.091905 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.095768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5rc\" (UniqueName: \"kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc\") pod \"ceilometer-0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.110014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f877s\" (UniqueName: \"kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s\") pod \"neutron-db-sync-6zn5f\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.138506 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.145445 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jspmg"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.154504 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j9h6k"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.155765 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.160516 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.160665 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.160948 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cc7fb" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.163962 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164041 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164110 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6gdcv\" (UniqueName: \"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164148 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164170 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164215 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9rg\" (UniqueName: \"kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164351 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164373 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqnn\" (UniqueName: \"kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164411 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle\") pod \"manila-db-sync-jspmg\" (UID: 
\"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.164533 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9h6k"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.170981 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.179324 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.181675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.182605 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795577799f-bc22j" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.182752 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.183536 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.184734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.185004 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.194692 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.195475 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.199650 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.200121 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.200524 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.203224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9rg\" (UniqueName: \"kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.203485 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data\") pod \"manila-db-sync-jspmg\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.203734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqnn\" (UniqueName: \"kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn\") pod \"barbican-db-sync-vn2mf\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") " pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.206909 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.208710 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.209408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdcv\" (UniqueName: \"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") pod \"cinder-db-sync-mcqm2\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") " pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.218782 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.224934 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.226909 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.229015 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.230248 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.230348 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.230474 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dmxj" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.230445 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.231028 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.244577 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.251049 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" event={"ID":"f64de676-f91d-4ea9-8271-35fee04008ff","Type":"ContainerStarted","Data":"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"} Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.251089 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" event={"ID":"f64de676-f91d-4ea9-8271-35fee04008ff","Type":"ContainerStarted","Data":"22d825d02248a7170bb998697f34934f9cc5e73590fae0641fe8016649fbe03d"} Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.265989 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266051 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266088 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266127 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42rp\" (UniqueName: \"kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266184 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266213 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266231 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.266248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.332049 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.353247 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367438 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367461 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367543 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdtg\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367583 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w42rp\" (UniqueName: \"kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367629 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367650 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367670 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367729 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367746 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367783 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367800 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367828 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjt7\" (UniqueName: \"kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367846 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367864 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.367931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.368469 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.370304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.372840 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.373216 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.376941 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.377029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.383073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.386526 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.391214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w42rp\" (UniqueName: \"kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp\") pod \"placement-db-sync-j9h6k\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 
09:15:42.392685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f\") pod \"horizon-58bf7c5879-fzhzx\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.393461 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jspmg" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475303 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475357 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475522 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475547 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjt7\" (UniqueName: \"kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475641 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" 
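[Editor's annotation — not part of the captured log.] The entries above trace kubelet's volume reconciler for the pods created in this window (placement-db-sync-j9h6k, horizon-58bf7c5879-fzhzx, dnsmasq-dns-8b5c85b87-4zcd4, glance-default-external-api-0): each volume is reported first as "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded"; local persistent volumes such as local-storage11-crc additionally log "MountVolume.MountDevice succeeded" with the device mount path. The following minimal Python sketch rebuilds that per-pod, per-volume timeline from journal lines in the exact format shown here. Assumptions: the klog quoting is as captured (inner quotes escaped as \"), the sketch tracks only the three main phases, and kubelet.log is a hypothetical path to this capture.

#!/usr/bin/env python3
"""Annotation sketch (not part of the captured log): rebuild the per-pod
volume-mount timeline from kubelet journal lines in the format shown above.
Assumptions: inner quotes are escaped exactly as captured here, and
kubelet.log is a hypothetical path to this capture."""
import re
from collections import defaultdict

# The three reconciler phases visible in the surrounding entries:
#   reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume ..."
#   reconciler_common.go:218] "operationExecutor.MountVolume started for volume ..."
#   operation_generator.go:637] "MountVolume.SetUp succeeded for volume ..."
ENTRY = re.compile(
    r'I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+) \d+ \S+\] '
    r'"(?P<phase>operationExecutor\.VerifyControllerAttachedVolume started'
    r'|operationExecutor\.MountVolume started'
    r'|MountVolume\.SetUp succeeded)'
    r' for volume \\"(?P<vol>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"')

def mount_timeline(lines):
    """Map (pod, volume) -> ordered list of (timestamp, phase)."""
    events = defaultdict(list)
    for line in lines:
        # finditer, not match: several journal entries can share one physical line here
        for m in ENTRY.finditer(line):
            events[(m["pod"], m["vol"])].append((m["ts"], m["phase"]))
    return events

if __name__ == "__main__":
    with open("kubelet.log") as fh:  # hypothetical path to this capture
        for (pod, vol), phases in sorted(mount_timeline(fh).items()):
            print(f"{pod} / {vol}: " + " -> ".join(p for _, p in phases))

Run against this capture, the sketch would print one line per (pod, volume) pair, e.g. "openstack/placement-db-sync-j9h6k / logs: operationExecutor.VerifyControllerAttachedVolume started -> operationExecutor.MountVolume started -> MountVolume.SetUp succeeded", which makes gaps in the attach/mount sequence easy to spot. [End of annotation; the captured log resumes below.]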
Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475702 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475722 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475756 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475806 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475834 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.475857 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdtg\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.476323 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.476426 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.476569 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.476721 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.477837 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.485911 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.486286 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.487610 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.489451 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.489562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.489661 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.500342 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.501306 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdtg\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg\") pod 
\"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.501529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts\") pod \"glance-default-external-api-0\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.508040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjt7\" (UniqueName: \"kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7\") pod \"dnsmasq-dns-8b5c85b87-4zcd4\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.540305 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.540344 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9h6k" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.591219 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.623177 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.638011 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.826153 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d22dh"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.866794 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.876208 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.880559 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.882544 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994080 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxds\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994185 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994211 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994240 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994349 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:42 crc kubenswrapper[4720]: I0202 09:15:42.994902 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6zn5f"] Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.008183 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.017277 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.022998 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795577799f-bc22j"] Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095588 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095739 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095784 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.095819 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0\") pod \"f64de676-f91d-4ea9-8271-35fee04008ff\" (UID: \"f64de676-f91d-4ea9-8271-35fee04008ff\") " Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096196 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096245 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nxds\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds\") pod \"glance-default-internal-api-0\" 
(UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096273 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096296 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096329 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.096762 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: W0202 09:15:43.095206 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a1ffe6_f27c_4751_9872_186b2010e2f0.slice/crio-7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f WatchSource:0}: Error finding container 7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f: Status 404 returned error can't find the container with id 7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.101834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.101995 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mcqm2"] Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.105158 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.112294 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.113748 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t" (OuterVolumeSpecName: "kube-api-access-p7r5t") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "kube-api-access-p7r5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.115074 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.117420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: W0202 09:15:43.119143 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod691b5691_2178_4f8e_a40c_7dfe5bec0f1b.slice/crio-013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0 WatchSource:0}: Error finding container 013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0: Status 404 returned error can't find the container with id 013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0 Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.120519 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.128727 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxds\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0" Feb 02 
09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.184163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.213676 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/f64de676-f91d-4ea9-8271-35fee04008ff-kube-api-access-p7r5t\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.220075 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.226193 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.241585 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.244577 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.252753 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config" (OuterVolumeSpecName: "config") pod "f64de676-f91d-4ea9-8271-35fee04008ff" (UID: "f64de676-f91d-4ea9-8271-35fee04008ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.270545 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcqm2" event={"ID":"691b5691-2178-4f8e-a40c-7dfe5bec0f1b","Type":"ContainerStarted","Data":"013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.273411 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zn5f" event={"ID":"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2","Type":"ContainerStarted","Data":"94f820e021a141ee8bb5bd22158af6472ea55ab91758e8a1a4e6b415d72f7bf5"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.277461 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d22dh" event={"ID":"0c592fdd-1fe3-4b3b-8339-2f10aef86f59","Type":"ContainerStarted","Data":"b0bde48a2ccb6ab3ad673c81afb322a53cecf78add3a05cb02201218fb3e525c"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.283939 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795577799f-bc22j" event={"ID":"068d02fb-f08d-4ac1-b120-c62f96ff520d","Type":"ContainerStarted","Data":"82ffaad8bf5989b77345fb2df4e23b945a7d7f49bd6682a0dd9ab4894c5258cf"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.287929 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerStarted","Data":"7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.292867 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" event={"ID":"f980ced9-f817-4da9-8b73-ec172d5cb8b7","Type":"ContainerStarted","Data":"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.292930 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" event={"ID":"f980ced9-f817-4da9-8b73-ec172d5cb8b7","Type":"ContainerStarted","Data":"64688e5d02d9d62807b317d8c925b7d07f5572892411135ff7dd248a810d9157"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.300473 4720 generic.go:334] "Generic (PLEG): container finished" podID="f64de676-f91d-4ea9-8271-35fee04008ff" containerID="92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c" exitCode=0
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.300520 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" event={"ID":"f64de676-f91d-4ea9-8271-35fee04008ff","Type":"ContainerDied","Data":"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.300548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq" event={"ID":"f64de676-f91d-4ea9-8271-35fee04008ff","Type":"ContainerDied","Data":"22d825d02248a7170bb998697f34934f9cc5e73590fae0641fe8016649fbe03d"}
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.300566 4720 scope.go:117] "RemoveContainer" containerID="92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.300702 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-kl2zq"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.319779 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.319818 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.319851 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.319863 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.319873 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f64de676-f91d-4ea9-8271-35fee04008ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.349451 4720 scope.go:117] "RemoveContainer" containerID="92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"
Feb 02 09:15:43 crc kubenswrapper[4720]: E0202 09:15:43.349816 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c\": container with ID starting with 92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c not found: ID does not exist" containerID="92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.349839 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c"} err="failed to get container status \"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c\": rpc error: code = NotFound desc = could not find container \"92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c\": container with ID starting with 92cc1c161b0e2c27cf815a397b24542c3ccd9c1ef856cb2b2add2bf62fc7e71c not found: ID does not exist"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.368309 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.398446 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vn2mf"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.406917 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"]
Feb 02 09:15:43 crc kubenswrapper[4720]: W0202 09:15:43.432790 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfca0fb47_7f62_4319_9852_c883684729e7.slice/crio-e4f1d1cf9907425053e40f111aee078639a1691c9913c136019ad63997390c56 WatchSource:0}: Error finding container e4f1d1cf9907425053e40f111aee078639a1691c9913c136019ad63997390c56: Status 404 returned error can't find the container with id e4f1d1cf9907425053e40f111aee078639a1691c9913c136019ad63997390c56
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.436937 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-kl2zq"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.462259 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.480482 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.488377 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9h6k"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.570033 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jspmg"]
Feb 02 09:15:43 crc kubenswrapper[4720]: I0202 09:15:43.759355 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.091995 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.100276 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls"
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148646 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148845 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148905 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148958 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcnbr\" (UniqueName: \"kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.148976 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc\") pod \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\" (UID: \"f980ced9-f817-4da9-8b73-ec172d5cb8b7\") "
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.170251 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"]
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.235417 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.235549 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr" (OuterVolumeSpecName: "kube-api-access-dcnbr") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "kube-api-access-dcnbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.252839 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config" (OuterVolumeSpecName: "config") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.253078 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.253097 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcnbr\" (UniqueName: \"kubernetes.io/projected/f980ced9-f817-4da9-8b73-ec172d5cb8b7-kube-api-access-dcnbr\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.253114 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.286543 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.291691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.296426 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"] Feb 02 09:15:44 crc kubenswrapper[4720]: E0202 09:15:44.297170 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f980ced9-f817-4da9-8b73-ec172d5cb8b7" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.297189 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f980ced9-f817-4da9-8b73-ec172d5cb8b7" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: E0202 09:15:44.297207 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64de676-f91d-4ea9-8271-35fee04008ff" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.297246 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64de676-f91d-4ea9-8271-35fee04008ff" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.297547 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f980ced9-f817-4da9-8b73-ec172d5cb8b7" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.297585 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64de676-f91d-4ea9-8271-35fee04008ff" containerName="init" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.298973 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.310862 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f980ced9-f817-4da9-8b73-ec172d5cb8b7" (UID: "f980ced9-f817-4da9-8b73-ec172d5cb8b7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.331363 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.371379 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.371420 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.371430 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f980ced9-f817-4da9-8b73-ec172d5cb8b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.404505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerStarted","Data":"413ae6cc07aa1df014048ea0f075ff52aa1a205ab1221c172fdf126bb5691eb0"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.411486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9h6k" event={"ID":"3cf88a12-cd68-4b5c-a7b1-ad649a75791e","Type":"ContainerStarted","Data":"98db2235ccfd2c5839d388f20eb0332dfa4f83a0a7e73ab2f68b0d99f2af972a"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.416057 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.417633 4720 generic.go:334] "Generic (PLEG): container finished" podID="fca0fb47-7f62-4319-9852-c883684729e7" containerID="fd7339bc42b6b01336205b2038b5f1391c56d7e1e125f04a2f63678e529f8dd2" exitCode=0 Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.417704 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" event={"ID":"fca0fb47-7f62-4319-9852-c883684729e7","Type":"ContainerDied","Data":"fd7339bc42b6b01336205b2038b5f1391c56d7e1e125f04a2f63678e529f8dd2"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.417728 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" event={"ID":"fca0fb47-7f62-4319-9852-c883684729e7","Type":"ContainerStarted","Data":"e4f1d1cf9907425053e40f111aee078639a1691c9913c136019ad63997390c56"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.442586 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vn2mf" event={"ID":"a1890e68-1a9c-4180-b989-6e178510e23b","Type":"ContainerStarted","Data":"0f1e868e6b0e6cc600ef710c3184e10fab6f944ca9f06e6419af0717ae3f6734"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.448027 4720 
generic.go:334] "Generic (PLEG): container finished" podID="f980ced9-f817-4da9-8b73-ec172d5cb8b7" containerID="51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af" exitCode=0 Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.448074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" event={"ID":"f980ced9-f817-4da9-8b73-ec172d5cb8b7","Type":"ContainerDied","Data":"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.448094 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" event={"ID":"f980ced9-f817-4da9-8b73-ec172d5cb8b7","Type":"ContainerDied","Data":"64688e5d02d9d62807b317d8c925b7d07f5572892411135ff7dd248a810d9157"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.448110 4720 scope.go:117] "RemoveContainer" containerID="51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.448243 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-57pls" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.462676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jspmg" event={"ID":"1a624e5d-098a-44e1-95b7-fa398979891a","Type":"ContainerStarted","Data":"6d61a41204909a92d6d125235b52bc4890d4304f930ac62ca4ae189ce1e5a3a3"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.470410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zn5f" event={"ID":"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2","Type":"ContainerStarted","Data":"ad2ffeb17c52f830cd9a6b29f456a54fc38186d37726cefbead6ea36b126f749"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.472577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.472608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rms\" (UniqueName: \"kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.472676 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.472713 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.472824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.478782 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.486058 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58bf7c5879-fzhzx" event={"ID":"7570ae92-0828-4f93-9926-cbc4821b37f8","Type":"ContainerStarted","Data":"4df0c4992e8ec0b00fd836fbaae94be8666525d14ccd4e1f143e64810bf2762e"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.491908 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d22dh" event={"ID":"0c592fdd-1fe3-4b3b-8339-2f10aef86f59","Type":"ContainerStarted","Data":"0efd010adde000065c9453bd87f3e3c8b0aa22b6807f92c52330e4c3389b9017"} Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.569509 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.574276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.574321 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rms\" (UniqueName: \"kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.574380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.574411 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.574504 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.576492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.576706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.577570 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.578824 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.602268 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rms\" (UniqueName: \"kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms\") pod \"horizon-5dfc4548bf-fnggh\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.638162 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6zn5f" podStartSLOduration=3.638143292 podStartE2EDuration="3.638143292s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:44.488610526 +0000 UTC m=+1178.344236082" watchObservedRunningTime="2026-02-02 09:15:44.638143292 +0000 UTC m=+1178.493768848" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.656415 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.664556 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d22dh" podStartSLOduration=3.664538602 podStartE2EDuration="3.664538602s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:44.54553673 +0000 UTC m=+1178.401162286" watchObservedRunningTime="2026-02-02 09:15:44.664538602 +0000 UTC m=+1178.520164158" Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.665002 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-57pls"] Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.695568 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dfc4548bf-fnggh"
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.737639 4720 scope.go:117] "RemoveContainer" containerID="51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af"
Feb 02 09:15:44 crc kubenswrapper[4720]: E0202 09:15:44.738689 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af\": container with ID starting with 51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af not found: ID does not exist" containerID="51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af"
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.738734 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af"} err="failed to get container status \"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af\": rpc error: code = NotFound desc = could not find container \"51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af\": container with ID starting with 51bef851032659d7b724e73327be89486ff0084848c3b997d5fbde5f1d9286af not found: ID does not exist"
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.904844 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64de676-f91d-4ea9-8271-35fee04008ff" path="/var/lib/kubelet/pods/f64de676-f91d-4ea9-8271-35fee04008ff/volumes"
Feb 02 09:15:44 crc kubenswrapper[4720]: I0202 09:15:44.906051 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f980ced9-f817-4da9-8b73-ec172d5cb8b7" path="/var/lib/kubelet/pods/f980ced9-f817-4da9-8b73-ec172d5cb8b7/volumes"
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.399180 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"]
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.553267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerStarted","Data":"56fee2f47f14e6d050aa50916d9d1b1955923623eb004091554d62cd8127d681"}
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.556968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfc4548bf-fnggh" event={"ID":"87fe795c-be20-481f-bbb4-142eb6642b99","Type":"ContainerStarted","Data":"fa31ae5a8fd13035a5ead1021d75abe6661db64d06ec71816982cf21b9b21429"}
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.569336 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerStarted","Data":"d2921a259f3cd132094675be2cd671662cbf7c9a5d5bf503363b00a7cccaad98"}
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.576957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" event={"ID":"fca0fb47-7f62-4319-9852-c883684729e7","Type":"ContainerStarted","Data":"861aa861dc0ff938c3fd4f3325f0e6ff78768019a8684598dea31b2ca7b19bba"}
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.577658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4"
Feb 02 09:15:45 crc kubenswrapper[4720]: I0202 09:15:45.608728 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" podStartSLOduration=4.608711447 podStartE2EDuration="4.608711447s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:45.597131868 +0000 UTC m=+1179.452757424" watchObservedRunningTime="2026-02-02 09:15:45.608711447 +0000 UTC m=+1179.464337003"
Feb 02 09:15:46 crc kubenswrapper[4720]: I0202 09:15:46.592779 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerStarted","Data":"0692d394595b152e48d284292ca424cd359bcca08ed95936502e54fa94a7b95a"}
Feb 02 09:15:46 crc kubenswrapper[4720]: I0202 09:15:46.593700 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-log" containerID="cri-o://56fee2f47f14e6d050aa50916d9d1b1955923623eb004091554d62cd8127d681" gracePeriod=30
Feb 02 09:15:46 crc kubenswrapper[4720]: I0202 09:15:46.594030 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-httpd" containerID="cri-o://0692d394595b152e48d284292ca424cd359bcca08ed95936502e54fa94a7b95a" gracePeriod=30
Feb 02 09:15:46 crc kubenswrapper[4720]: I0202 09:15:46.600666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerStarted","Data":"c1a24f3b7fe8ee08dc815516a408f5fb9ced2b6dd84c60f8d53b92b7606b4ba1"}
Feb 02 09:15:46 crc kubenswrapper[4720]: I0202 09:15:46.631117 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.6310995219999995 podStartE2EDuration="4.631099522s" podCreationTimestamp="2026-02-02 09:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:46.616389244 +0000 UTC m=+1180.472014800" watchObservedRunningTime="2026-02-02 09:15:46.631099522 +0000 UTC m=+1180.486725078"
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.612651 4720 generic.go:334] "Generic (PLEG): container finished" podID="0c592fdd-1fe3-4b3b-8339-2f10aef86f59" containerID="0efd010adde000065c9453bd87f3e3c8b0aa22b6807f92c52330e4c3389b9017" exitCode=0
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.612732 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d22dh" event={"ID":"0c592fdd-1fe3-4b3b-8339-2f10aef86f59","Type":"ContainerDied","Data":"0efd010adde000065c9453bd87f3e3c8b0aa22b6807f92c52330e4c3389b9017"}
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.618675 4720 generic.go:334] "Generic (PLEG): container finished" podID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerID="0692d394595b152e48d284292ca424cd359bcca08ed95936502e54fa94a7b95a" exitCode=143
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.618722 4720 generic.go:334] "Generic (PLEG): container finished" podID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerID="56fee2f47f14e6d050aa50916d9d1b1955923623eb004091554d62cd8127d681" exitCode=143
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.618742 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerDied","Data":"0692d394595b152e48d284292ca424cd359bcca08ed95936502e54fa94a7b95a"}
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.618787 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerDied","Data":"56fee2f47f14e6d050aa50916d9d1b1955923623eb004091554d62cd8127d681"}
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.622223 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerStarted","Data":"4d8bc4c64dc1fb970d1347fac50d69ef67a017274b4d969460ab8afbed98dbf1"}
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.622417 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-httpd" containerID="cri-o://4d8bc4c64dc1fb970d1347fac50d69ef67a017274b4d969460ab8afbed98dbf1" gracePeriod=30
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.622446 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-log" containerID="cri-o://c1a24f3b7fe8ee08dc815516a408f5fb9ced2b6dd84c60f8d53b92b7606b4ba1" gracePeriod=30
Feb 02 09:15:47 crc kubenswrapper[4720]: I0202 09:15:47.658816 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.658790046 podStartE2EDuration="6.658790046s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:15:47.646848049 +0000 UTC m=+1181.502473605" watchObservedRunningTime="2026-02-02 09:15:47.658790046 +0000 UTC m=+1181.514415602"
Feb 02 09:15:48 crc kubenswrapper[4720]: I0202 09:15:48.643794 4720 generic.go:334] "Generic (PLEG): container finished" podID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerID="4d8bc4c64dc1fb970d1347fac50d69ef67a017274b4d969460ab8afbed98dbf1" exitCode=0
Feb 02 09:15:48 crc kubenswrapper[4720]: I0202 09:15:48.644143 4720 generic.go:334] "Generic (PLEG): container finished" podID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerID="c1a24f3b7fe8ee08dc815516a408f5fb9ced2b6dd84c60f8d53b92b7606b4ba1" exitCode=143
Feb 02 09:15:48 crc kubenswrapper[4720]: I0202 09:15:48.643851 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerDied","Data":"4d8bc4c64dc1fb970d1347fac50d69ef67a017274b4d969460ab8afbed98dbf1"}
Feb 02 09:15:48 crc kubenswrapper[4720]: I0202 09:15:48.644246 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerDied","Data":"c1a24f3b7fe8ee08dc815516a408f5fb9ced2b6dd84c60f8d53b92b7606b4ba1"}
Feb 02 09:15:52 crc kubenswrapper[4720]: I0202 09:15:52.625129 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4"
Feb 02 09:15:52 crc kubenswrapper[4720]: I0202 09:15:52.736307 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"]
Feb 02 09:15:52 crc kubenswrapper[4720]: I0202 09:15:52.736569 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" containerID="cri-o://447230fc15143f275a6a6aeff27501d7c7982f69bf7a4dbac26da477afe8e8db" gracePeriod=10
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.322701 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795577799f-bc22j"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.359203 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.360957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.363334 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.372023 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.461705 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.470804 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86d4c4b4d8-gbbkh"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.472208 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86d4c4b4d8-gbbkh"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.487417 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86d4c4b4d8-gbbkh"]
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.506965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507367 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507530 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507557 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.507585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d62b\" (UniqueName: \"kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.609357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-logs\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.609400 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zdg\" (UniqueName: \"kubernetes.io/projected/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-kube-api-access-r6zdg\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.609493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611140 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d62b\" (UniqueName: \"kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611190 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-combined-ca-bundle\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-config-data\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611297 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611324 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-tls-certs\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611363 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611391 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611556 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-scripts\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.611595 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-secret-key\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.612134 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.612362 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.613245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.616232 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.621245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.629124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.633088 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d62b\" (UniqueName: \"kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b\") pod \"horizon-5b896b6bb4-gxblv\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.682458 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.712957 4720 generic.go:334] "Generic (PLEG): container finished" podID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerID="447230fc15143f275a6a6aeff27501d7c7982f69bf7a4dbac26da477afe8e8db" exitCode=0 Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" event={"ID":"809bb436-ed06-47de-aa07-670cf4f4ef8e","Type":"ContainerDied","Data":"447230fc15143f275a6a6aeff27501d7c7982f69bf7a4dbac26da477afe8e8db"} Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-logs\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zdg\" (UniqueName: \"kubernetes.io/projected/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-kube-api-access-r6zdg\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713278 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-combined-ca-bundle\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-config-data\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713323 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-tls-certs\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-scripts\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713472 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-secret-key\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.713717 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-logs\") pod \"horizon-86d4c4b4d8-gbbkh\" 
(UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.715155 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-config-data\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.716658 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-scripts\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.718116 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-secret-key\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.720847 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-horizon-tls-certs\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.720975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-combined-ca-bundle\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.730203 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zdg\" (UniqueName: \"kubernetes.io/projected/8c4ce7a3-3e40-463d-b5f9-95b3352960f2-kube-api-access-r6zdg\") pod \"horizon-86d4c4b4d8-gbbkh\" (UID: \"8c4ce7a3-3e40-463d-b5f9-95b3352960f2\") " pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:53 crc kubenswrapper[4720]: I0202 09:15:53.897712 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:15:55 crc kubenswrapper[4720]: I0202 09:15:55.644249 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Feb 02 09:15:58 crc kubenswrapper[4720]: E0202 09:15:58.546902 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 09:15:58 crc kubenswrapper[4720]: E0202 09:15:58.547475 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh69h5h68fh58dh686hffh7dh5f4h645h679h559h568h5f7h78h5bfh567h5b4h547h68fh586h67h676hddh77h666h56fhd9hd8hcbh65h5dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl9tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-795577799f-bc22j_openstack(068d02fb-f08d-4ac1-b120-c62f96ff520d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:15:58 crc kubenswrapper[4720]: E0202 09:15:58.550158 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-795577799f-bc22j" podUID="068d02fb-f08d-4ac1-b120-c62f96ff520d" Feb 02 09:15:59 crc kubenswrapper[4720]: I0202 09:15:59.771850 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" containerID="ad2ffeb17c52f830cd9a6b29f456a54fc38186d37726cefbead6ea36b126f749" exitCode=0 Feb 02 09:15:59 crc kubenswrapper[4720]: I0202 09:15:59.771969 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zn5f" event={"ID":"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2","Type":"ContainerDied","Data":"ad2ffeb17c52f830cd9a6b29f456a54fc38186d37726cefbead6ea36b126f749"} Feb 02 09:16:00 crc kubenswrapper[4720]: I0202 09:16:00.645004 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.644199 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.645087 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h554h5dfh66bh75h78h569hfch5b9hb4h5c8h64fh5d8h54dh5f5h5f8h55h7h54h5ddh5fdh64hd8h6bh555h676h7dh64dh5b5hc8h565hcfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4rms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dfc4548bf-fnggh_openstack(87fe795c-be20-481f-bbb4-142eb6642b99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.648946 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for 
\"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dfc4548bf-fnggh" podUID="87fe795c-be20-481f-bbb4-142eb6642b99" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.661765 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.662038 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n588h55ch6bh77h7h595h67bh5b5h65bh5fh554h674h696h75h7fh5b9h64h7dh59bh56fh5d7h64dhf9h96h5d5h65ch558h5ddh669h88h4hb8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrt5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-58bf7c5879-fzhzx_openstack(7570ae92-0828-4f93-9926-cbc4821b37f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:16:03 crc kubenswrapper[4720]: E0202 09:16:03.667183 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-58bf7c5879-fzhzx" podUID="7570ae92-0828-4f93-9926-cbc4821b37f8" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.775381 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.815075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d22dh" event={"ID":"0c592fdd-1fe3-4b3b-8339-2f10aef86f59","Type":"ContainerDied","Data":"b0bde48a2ccb6ab3ad673c81afb322a53cecf78add3a05cb02201218fb3e525c"} Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.815119 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0bde48a2ccb6ab3ad673c81afb322a53cecf78add3a05cb02201218fb3e525c" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.815499 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d22dh" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895309 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895578 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9pmg\" (UniqueName: \"kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895649 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.895683 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts\") pod \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\" (UID: \"0c592fdd-1fe3-4b3b-8339-2f10aef86f59\") " Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.903113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.904672 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts" (OuterVolumeSpecName: "scripts") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.908221 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.912122 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg" (OuterVolumeSpecName: "kube-api-access-g9pmg") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "kube-api-access-g9pmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.925953 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data" (OuterVolumeSpecName: "config-data") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.952820 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c592fdd-1fe3-4b3b-8339-2f10aef86f59" (UID: "0c592fdd-1fe3-4b3b-8339-2f10aef86f59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998022 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998276 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998289 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9pmg\" (UniqueName: \"kubernetes.io/projected/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-kube-api-access-g9pmg\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998298 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998306 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:03 crc kubenswrapper[4720]: I0202 09:16:03.998316 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c592fdd-1fe3-4b3b-8339-2f10aef86f59-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.868832 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d22dh"] Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.880543 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d22dh"] Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.898618 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c592fdd-1fe3-4b3b-8339-2f10aef86f59" path="/var/lib/kubelet/pods/0c592fdd-1fe3-4b3b-8339-2f10aef86f59/volumes" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.961417 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mz5n2"] Feb 02 09:16:04 crc kubenswrapper[4720]: E0202 09:16:04.967268 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c592fdd-1fe3-4b3b-8339-2f10aef86f59" containerName="keystone-bootstrap" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.967426 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c592fdd-1fe3-4b3b-8339-2f10aef86f59" containerName="keystone-bootstrap" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.972442 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c592fdd-1fe3-4b3b-8339-2f10aef86f59" containerName="keystone-bootstrap" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.974191 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.978085 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.978153 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 09:16:04 crc kubenswrapper[4720]: I0202 09:16:04.978548 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.005764 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jsbgm" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.006146 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.007756 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mz5n2"] Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119826 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119859 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119890 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgn5x\" (UniqueName: \"kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119942 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.119994 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.221976 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.222032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.222092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.222119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgn5x\" (UniqueName: \"kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.222192 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.222267 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.228642 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.232141 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.235344 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.235670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys\") pod \"keystone-bootstrap-mz5n2\" (UID: 
\"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.235849 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.240138 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgn5x\" (UniqueName: \"kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x\") pod \"keystone-bootstrap-mz5n2\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:05 crc kubenswrapper[4720]: I0202 09:16:05.315816 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:10 crc kubenswrapper[4720]: I0202 09:16:10.644235 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Feb 02 09:16:10 crc kubenswrapper[4720]: I0202 09:16:10.645443 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.842167 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.853921 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795577799f-bc22j" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.858410 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.903572 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.923017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795577799f-bc22j" event={"ID":"068d02fb-f08d-4ac1-b120-c62f96ff520d","Type":"ContainerDied","Data":"82ffaad8bf5989b77345fb2df4e23b945a7d7f49bd6682a0dd9ab4894c5258cf"} Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.923114 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795577799f-bc22j" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.929981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24d70fb8-5370-46b2-89e3-e3c574d4b16c","Type":"ContainerDied","Data":"d2921a259f3cd132094675be2cd671662cbf7c9a5d5bf503363b00a7cccaad98"} Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.930031 4720 scope.go:117] "RemoveContainer" containerID="4d8bc4c64dc1fb970d1347fac50d69ef67a017274b4d969460ab8afbed98dbf1" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.930211 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.932830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6zn5f" event={"ID":"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2","Type":"ContainerDied","Data":"94f820e021a141ee8bb5bd22158af6472ea55ab91758e8a1a4e6b415d72f7bf5"} Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.932854 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f820e021a141ee8bb5bd22158af6472ea55ab91758e8a1a4e6b415d72f7bf5" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.932898 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6zn5f" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.937073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22832dbe-5e5a-4767-b663-bd2134be39a4","Type":"ContainerDied","Data":"413ae6cc07aa1df014048ea0f075ff52aa1a205ab1221c172fdf126bb5691eb0"} Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.940983 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946018 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946065 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data\") pod \"068d02fb-f08d-4ac1-b120-c62f96ff520d\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946110 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts\") pod \"068d02fb-f08d-4ac1-b120-c62f96ff520d\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946416 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946460 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nxds\" (UniqueName: 
\"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946571 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946595 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946632 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946682 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key\") pod \"068d02fb-f08d-4ac1-b120-c62f96ff520d\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946701 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946731 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946759 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdtg\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946773 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts\") pod \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\" (UID: \"24d70fb8-5370-46b2-89e3-e3c574d4b16c\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946816 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946835 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs\") pod \"068d02fb-f08d-4ac1-b120-c62f96ff520d\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946868 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"22832dbe-5e5a-4767-b663-bd2134be39a4\" (UID: \"22832dbe-5e5a-4767-b663-bd2134be39a4\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.946902 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl9tk\" (UniqueName: \"kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk\") pod \"068d02fb-f08d-4ac1-b120-c62f96ff520d\" (UID: \"068d02fb-f08d-4ac1-b120-c62f96ff520d\") " Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.947167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts" (OuterVolumeSpecName: "scripts") pod "068d02fb-f08d-4ac1-b120-c62f96ff520d" (UID: "068d02fb-f08d-4ac1-b120-c62f96ff520d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.947373 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data" (OuterVolumeSpecName: "config-data") pod "068d02fb-f08d-4ac1-b120-c62f96ff520d" (UID: "068d02fb-f08d-4ac1-b120-c62f96ff520d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.947595 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.949114 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs" (OuterVolumeSpecName: "logs") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.949530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs" (OuterVolumeSpecName: "logs") pod "068d02fb-f08d-4ac1-b120-c62f96ff520d" (UID: "068d02fb-f08d-4ac1-b120-c62f96ff520d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.952241 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts" (OuterVolumeSpecName: "scripts") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.952256 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.952823 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.953189 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph" (OuterVolumeSpecName: "ceph") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.953846 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs" (OuterVolumeSpecName: "logs") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.954333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "068d02fb-f08d-4ac1-b120-c62f96ff520d" (UID: "068d02fb-f08d-4ac1-b120-c62f96ff520d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.954527 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg" (OuterVolumeSpecName: "kube-api-access-7pdtg") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "kube-api-access-7pdtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.958738 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.959499 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk" (OuterVolumeSpecName: "kube-api-access-rl9tk") pod "068d02fb-f08d-4ac1-b120-c62f96ff520d" (UID: "068d02fb-f08d-4ac1-b120-c62f96ff520d"). InnerVolumeSpecName "kube-api-access-rl9tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.975411 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts" (OuterVolumeSpecName: "scripts") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.975417 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds" (OuterVolumeSpecName: "kube-api-access-6nxds") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "kube-api-access-6nxds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.975465 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph" (OuterVolumeSpecName: "ceph") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.986762 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:11 crc kubenswrapper[4720]: I0202 09:16:11.986934 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.006463 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data" (OuterVolumeSpecName: "config-data") pod "24d70fb8-5370-46b2-89e3-e3c574d4b16c" (UID: "24d70fb8-5370-46b2-89e3-e3c574d4b16c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.010366 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data" (OuterVolumeSpecName: "config-data") pod "22832dbe-5e5a-4767-b663-bd2134be39a4" (UID: "22832dbe-5e5a-4767-b663-bd2134be39a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048148 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle\") pod \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048273 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config\") pod \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048391 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f877s\" (UniqueName: \"kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s\") pod \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\" (UID: \"ce4af37e-f6d7-4a2a-acf1-82ed860df8f2\") " Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048923 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nxds\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-kube-api-access-6nxds\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048940 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048949 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048958 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048968 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d70fb8-5370-46b2-89e3-e3c574d4b16c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048976 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/068d02fb-f08d-4ac1-b120-c62f96ff520d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048984 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048991 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.048999 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdtg\" (UniqueName: \"kubernetes.io/projected/22832dbe-5e5a-4767-b663-bd2134be39a4-kube-api-access-7pdtg\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 
09:16:12.049006 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d70fb8-5370-46b2-89e3-e3c574d4b16c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049014 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049021 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22832dbe-5e5a-4767-b663-bd2134be39a4-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049029 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/068d02fb-f08d-4ac1-b120-c62f96ff520d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049046 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049055 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl9tk\" (UniqueName: \"kubernetes.io/projected/068d02fb-f08d-4ac1-b120-c62f96ff520d-kube-api-access-rl9tk\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049063 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049071 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049079 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/068d02fb-f08d-4ac1-b120-c62f96ff520d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049092 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049101 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/24d70fb8-5370-46b2-89e3-e3c574d4b16c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.049108 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22832dbe-5e5a-4767-b663-bd2134be39a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.051774 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s" (OuterVolumeSpecName: "kube-api-access-f877s") pod "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" (UID: "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2"). InnerVolumeSpecName "kube-api-access-f877s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.066340 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.068064 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" (UID: "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.068339 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.076102 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config" (OuterVolumeSpecName: "config") pod "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" (UID: "ce4af37e-f6d7-4a2a-acf1-82ed860df8f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.150305 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.150332 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.150343 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.150353 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f877s\" (UniqueName: \"kubernetes.io/projected/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2-kube-api-access-f877s\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.150363 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.293038 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795577799f-bc22j"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.304917 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-795577799f-bc22j"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.307200 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.322982 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.337958 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.345271 4720 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.357810 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.358176 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358194 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.358215 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358222 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.358234 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" containerName="neutron-db-sync" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358241 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" containerName="neutron-db-sync" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.358251 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358258 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.358273 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358279 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358422 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" containerName="neutron-db-sync" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358436 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358449 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-httpd" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358457 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.358465 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" containerName="glance-log" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.359755 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.361626 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.361792 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.361966 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.362030 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.362871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dmxj" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.368978 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.371098 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.378797 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.379463 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.379821 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.381232 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.406172 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.406325 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpqnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vn2mf_openstack(a1890e68-1a9c-4180-b989-6e178510e23b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.407471 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vn2mf" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455334 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455359 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455379 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5crx\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx\") pod \"glance-default-internal-api-0\" (UID: 
\"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455400 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455475 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455492 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455534 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmrc\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455573 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455614 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455649 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.455677 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557222 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557355 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5crx\" (UniqueName: 
\"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557381 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557863 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557916 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.557977 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558000 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klmrc\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558075 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558097 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558151 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558169 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558754 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558993 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.559209 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.558440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " 
pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.560119 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.561033 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.561482 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.561516 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.561843 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.563441 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.563960 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.564843 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.565026 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.569691 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.573525 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5crx\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.573609 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.577544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.578935 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klmrc\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.588174 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.589120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.693178 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.702743 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.897485 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068d02fb-f08d-4ac1-b120-c62f96ff520d" path="/var/lib/kubelet/pods/068d02fb-f08d-4ac1-b120-c62f96ff520d/volumes" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.898183 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22832dbe-5e5a-4767-b663-bd2134be39a4" path="/var/lib/kubelet/pods/22832dbe-5e5a-4767-b663-bd2134be39a4/volumes" Feb 02 09:16:12 crc kubenswrapper[4720]: I0202 09:16:12.903642 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d70fb8-5370-46b2-89e3-e3c574d4b16c" path="/var/lib/kubelet/pods/24d70fb8-5370-46b2-89e3-e3c574d4b16c/volumes" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.968436 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vn2mf" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.989721 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.989955 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm9rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-db-sync-jspmg_openstack(1a624e5d-098a-44e1-95b7-fa398979891a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:16:12 crc kubenswrapper[4720]: E0202 09:16:12.991142 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-jspmg" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.107851 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.116924 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.124812 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167291 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key\") pod \"7570ae92-0828-4f93-9926-cbc4821b37f8\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167349 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167370 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data\") pod \"7570ae92-0828-4f93-9926-cbc4821b37f8\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167398 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167438 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts\") pod \"7570ae92-0828-4f93-9926-cbc4821b37f8\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167476 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs\") pod \"87fe795c-be20-481f-bbb4-142eb6642b99\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167501 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key\") pod \"87fe795c-be20-481f-bbb4-142eb6642b99\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " Feb 02 09:16:13 crc 
kubenswrapper[4720]: I0202 09:16:13.167565 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f\") pod \"7570ae92-0828-4f93-9926-cbc4821b37f8\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167588 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs\") pod \"7570ae92-0828-4f93-9926-cbc4821b37f8\" (UID: \"7570ae92-0828-4f93-9926-cbc4821b37f8\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167635 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts\") pod \"87fe795c-be20-481f-bbb4-142eb6642b99\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167781 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data\") pod \"87fe795c-be20-481f-bbb4-142eb6642b99\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167803 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167860 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4rms\" (UniqueName: \"kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms\") pod \"87fe795c-be20-481f-bbb4-142eb6642b99\" (UID: \"87fe795c-be20-481f-bbb4-142eb6642b99\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.167900 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f979k\" (UniqueName: \"kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k\") pod \"809bb436-ed06-47de-aa07-670cf4f4ef8e\" (UID: \"809bb436-ed06-47de-aa07-670cf4f4ef8e\") " Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.169799 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs" (OuterVolumeSpecName: "logs") pod "7570ae92-0828-4f93-9926-cbc4821b37f8" (UID: "7570ae92-0828-4f93-9926-cbc4821b37f8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.174275 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data" (OuterVolumeSpecName: "config-data") pod "7570ae92-0828-4f93-9926-cbc4821b37f8" (UID: "7570ae92-0828-4f93-9926-cbc4821b37f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.179988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts" (OuterVolumeSpecName: "scripts") pod "87fe795c-be20-481f-bbb4-142eb6642b99" (UID: "87fe795c-be20-481f-bbb4-142eb6642b99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.180501 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs" (OuterVolumeSpecName: "logs") pod "87fe795c-be20-481f-bbb4-142eb6642b99" (UID: "87fe795c-be20-481f-bbb4-142eb6642b99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.180815 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts" (OuterVolumeSpecName: "scripts") pod "7570ae92-0828-4f93-9926-cbc4821b37f8" (UID: "7570ae92-0828-4f93-9926-cbc4821b37f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.181324 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data" (OuterVolumeSpecName: "config-data") pod "87fe795c-be20-481f-bbb4-142eb6642b99" (UID: "87fe795c-be20-481f-bbb4-142eb6642b99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.182418 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms" (OuterVolumeSpecName: "kube-api-access-f4rms") pod "87fe795c-be20-481f-bbb4-142eb6642b99" (UID: "87fe795c-be20-481f-bbb4-142eb6642b99"). InnerVolumeSpecName "kube-api-access-f4rms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.182520 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f" (OuterVolumeSpecName: "kube-api-access-nrt5f") pod "7570ae92-0828-4f93-9926-cbc4821b37f8" (UID: "7570ae92-0828-4f93-9926-cbc4821b37f8"). InnerVolumeSpecName "kube-api-access-nrt5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.184789 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k" (OuterVolumeSpecName: "kube-api-access-f979k") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "kube-api-access-f979k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.185837 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7570ae92-0828-4f93-9926-cbc4821b37f8" (UID: "7570ae92-0828-4f93-9926-cbc4821b37f8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.195369 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "87fe795c-be20-481f-bbb4-142eb6642b99" (UID: "87fe795c-be20-481f-bbb4-142eb6642b99"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.211158 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"] Feb 02 09:16:13 crc kubenswrapper[4720]: E0202 09:16:13.211695 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="init" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.211711 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="init" Feb 02 09:16:13 crc kubenswrapper[4720]: E0202 09:16:13.211723 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.211730 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.212046 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.214235 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.234626 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"] Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278214 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkmpd\" (UniqueName: \"kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278420 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87fe795c-be20-481f-bbb4-142eb6642b99-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278431 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/87fe795c-be20-481f-bbb4-142eb6642b99-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278440 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/7570ae92-0828-4f93-9926-cbc4821b37f8-kube-api-access-nrt5f\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278448 4720 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7570ae92-0828-4f93-9926-cbc4821b37f8-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278456 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278463 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87fe795c-be20-481f-bbb4-142eb6642b99-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278471 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4rms\" (UniqueName: \"kubernetes.io/projected/87fe795c-be20-481f-bbb4-142eb6642b99-kube-api-access-f4rms\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278480 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f979k\" (UniqueName: \"kubernetes.io/projected/809bb436-ed06-47de-aa07-670cf4f4ef8e-kube-api-access-f979k\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278490 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7570ae92-0828-4f93-9926-cbc4821b37f8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278498 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.278508 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7570ae92-0828-4f93-9926-cbc4821b37f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.281797 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.291501 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.293467 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.294673 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.297545 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bbcg9" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.297734 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.298279 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.298309 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.300739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.317340 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config" (OuterVolumeSpecName: "config") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.321321 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.338457 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "809bb436-ed06-47de-aa07-670cf4f4ef8e" (UID: "809bb436-ed06-47de-aa07-670cf4f4ef8e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380257 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380516 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkmpd\" (UniqueName: \"kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380618 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380755 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380819 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn59z\" (UniqueName: \"kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.380960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381053 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: 
\"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381283 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381385 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381440 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381494 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381547 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.381598 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/809bb436-ed06-47de-aa07-670cf4f4ef8e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.382395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.383400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.383948 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.384571 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0\") 
pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.382635 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.399269 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkmpd\" (UniqueName: \"kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd\") pod \"dnsmasq-dns-84b966f6c9-kthqk\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") " pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.483474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.483540 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn59z\" (UniqueName: \"kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.483612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.483643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.483666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.487249 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.487792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc 
kubenswrapper[4720]: I0202 09:16:13.488810 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.491187 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.498740 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn59z\" (UniqueName: \"kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z\") pod \"neutron-8565544576-78c6h\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.599477 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.628008 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.975248 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dfc4548bf-fnggh" event={"ID":"87fe795c-be20-481f-bbb4-142eb6642b99","Type":"ContainerDied","Data":"fa31ae5a8fd13035a5ead1021d75abe6661db64d06ec71816982cf21b9b21429"} Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.975341 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dfc4548bf-fnggh" Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.991746 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" event={"ID":"809bb436-ed06-47de-aa07-670cf4f4ef8e","Type":"ContainerDied","Data":"187fba6a246082bbeb0878aa5d0515881b5a902e042687b6d668c77ac3cdffb8"} Feb 02 09:16:13 crc kubenswrapper[4720]: I0202 09:16:13.991833 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.002017 4720 util.go:48] "No ready sandbox for pod can be found. 
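
The run of entries above is the kubelet volume reconciler working through the dnsmasq-dns-84b966f6c9-kthqk and neutron-8565544576-78c6h pods: each volume is first verified as controller-attached (reconciler_common.go:245), then handed to the mounter (reconciler_common.go:218), and finally confirmed by MountVolume.SetUp (operation_generator.go:637); only once the volumes are up does the pod sandbox get created (the util.go:30 "No sandbox" lines). A minimal sketch of the pod volume spec shapes behind this sequence, using k8s.io/api types. The volume names are taken from the log; the referenced Secret and ConfigMap object names are hypothetical, since the log records only volume names and pod UIDs:

    // Sketch only. Volume names are from the log above; the SecretName and
    // ConfigMap name values are hypothetical placeholders.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    var exampleVolumes = []corev1.Volume{
        // kubernetes.io/secret volume, as on the neutron pod
        // (config, httpd-config, ovndb-tls-certs, combined-ca-bundle, ...)
        {Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}},
        // kubernetes.io/configmap volume, as on the dnsmasq pod
        // (dns-svc, ovsdbserver-nb, ovsdbserver-sb, dns-swift-storage-0, ...)
        {Name: "dns-svc", VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"}}}},
        // kubernetes.io/projected volume backing kube-api-access-pn59z
        {Name: "kube-api-access-pn59z", VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"},
                }}}}},
    }
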
Need to start a new one" pod="openstack/horizon-58bf7c5879-fzhzx" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.003060 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58bf7c5879-fzhzx" event={"ID":"7570ae92-0828-4f93-9926-cbc4821b37f8","Type":"ContainerDied","Data":"4df0c4992e8ec0b00fd836fbaae94be8666525d14ccd4e1f143e64810bf2762e"} Feb 02 09:16:14 crc kubenswrapper[4720]: E0202 09:16:14.010898 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-jspmg" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.045210 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"] Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.056704 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-rrrtl"] Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.081834 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"] Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.089534 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dfc4548bf-fnggh"] Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.117014 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"] Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.125752 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58bf7c5879-fzhzx"] Feb 02 09:16:14 crc kubenswrapper[4720]: E0202 09:16:14.505807 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 09:16:14 crc kubenswrapper[4720]: E0202 09:16:14.506065 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gdcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-mcqm2_openstack(691b5691-2178-4f8e-a40c-7dfe5bec0f1b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:16:14 crc kubenswrapper[4720]: E0202 09:16:14.507455 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-mcqm2" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.516833 4720 scope.go:117] "RemoveContainer" containerID="c1a24f3b7fe8ee08dc815516a408f5fb9ced2b6dd84c60f8d53b92b7606b4ba1" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.715725 4720 scope.go:117] "RemoveContainer" containerID="0692d394595b152e48d284292ca424cd359bcca08ed95936502e54fa94a7b95a" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.880630 4720 scope.go:117] "RemoveContainer" containerID="56fee2f47f14e6d050aa50916d9d1b1955923623eb004091554d62cd8127d681" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.915720 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7570ae92-0828-4f93-9926-cbc4821b37f8" path="/var/lib/kubelet/pods/7570ae92-0828-4f93-9926-cbc4821b37f8/volumes" Feb 02 09:16:14 crc 
kubenswrapper[4720]: I0202 09:16:14.916289 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" path="/var/lib/kubelet/pods/809bb436-ed06-47de-aa07-670cf4f4ef8e/volumes" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.917294 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fe795c-be20-481f-bbb4-142eb6642b99" path="/var/lib/kubelet/pods/87fe795c-be20-481f-bbb4-142eb6642b99/volumes" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.938041 4720 scope.go:117] "RemoveContainer" containerID="447230fc15143f275a6a6aeff27501d7c7982f69bf7a4dbac26da477afe8e8db" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.979615 4720 scope.go:117] "RemoveContainer" containerID="7f8c40d76efd71f7ea248aa58cef978ceede578f2d9c1339892efc562f7303be" Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.986362 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:16:14 crc kubenswrapper[4720]: I0202 09:16:14.995463 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.006622 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ea3e29_f479_4d19_9200_476ab329c100.slice/crio-8a2a69580026bab7ef2ae60850efb652a7b5dc1ebfc0d1a560b743bc40b9adbc WatchSource:0}: Error finding container 8a2a69580026bab7ef2ae60850efb652a7b5dc1ebfc0d1a560b743bc40b9adbc: Status 404 returned error can't find the container with id 8a2a69580026bab7ef2ae60850efb652a7b5dc1ebfc0d1a560b743bc40b9adbc Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.033751 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9h6k" event={"ID":"3cf88a12-cd68-4b5c-a7b1-ad649a75791e","Type":"ContainerStarted","Data":"70a32d6ceb128dad3d55b3f12ea7cf9655b112a67fe2977c604c626098348aaa"} Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.052787 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerStarted","Data":"a998e4cbb2fa3bb6c533bcc701c44070e3c879ba38cd8a4b6b970daaeb39c7ae"} Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.062154 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f77897559-wqg4q"] Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.063634 4720 util.go:30] "No sandbox for pod can be found. 
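
The manila-db-sync and cinder-db-sync failures above show the two stages of an image pull problem: a hard ErrImagePull (here "context canceled" while copying the image config from quay.io), after which the kubelet reschedules the container start under ImagePullBackOff and logs "Back-off pulling image" on each retry with increasing delay. These retries also surface as Events on the pod; a minimal client-go sketch for listing them, assuming a reachable cluster and a standard kubeconfig (both assumptions, not shown in the log):

    // Sketch only: list the Events behind the ImagePullBackOff entries above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        evs, err := cs.CoreV1().Events("openstack").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=cinder-db-sync-mcqm2",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            // Expect reasons like "Failed" and "BackOff" for the entries above.
            fmt.Printf("%s %s: %s\n", e.LastTimestamp, e.Reason, e.Message)
        }
    }
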
Need to start a new one" pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.066311 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j9h6k" podStartSLOduration=5.205605143 podStartE2EDuration="34.066287031s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="2026-02-02 09:15:43.51132688 +0000 UTC m=+1177.366952436" lastFinishedPulling="2026-02-02 09:16:12.372008768 +0000 UTC m=+1206.227634324" observedRunningTime="2026-02-02 09:16:15.058489456 +0000 UTC m=+1208.914115012" watchObservedRunningTime="2026-02-02 09:16:15.066287031 +0000 UTC m=+1208.921912587" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.067819 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.068090 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 09:16:15 crc kubenswrapper[4720]: E0202 09:16:15.070246 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-mcqm2" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.081377 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f77897559-wqg4q"] Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.110518 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdlb\" (UniqueName: \"kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112463 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112508 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " 
pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112657 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.112709 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.173815 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86d4c4b4d8-gbbkh"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.178340 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c4ce7a3_3e40_463d_b5f9_95b3352960f2.slice/crio-a5a35128e0b2e3258f548d89a6c5b76d3741aee34363411e2c511842dda39933 WatchSource:0}: Error finding container a5a35128e0b2e3258f548d89a6c5b76d3741aee34363411e2c511842dda39933: Status 404 returned error can't find the container with id a5a35128e0b2e3258f548d89a6c5b76d3741aee34363411e2c511842dda39933 Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.213937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214026 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214064 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214105 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jdlb\" (UniqueName: \"kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214132 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214176 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.214245 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.222097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.225068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.225257 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.227373 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.229229 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.229509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.230800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jdlb\" (UniqueName: \"kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb\") pod \"neutron-6f77897559-wqg4q\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") " pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.252783 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mz5n2"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.256197 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3af89e_0227_4cd5_a546_b9ef7ec514a7.slice/crio-823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e WatchSource:0}: Error finding container 823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e: Status 404 returned error can't find the container with id 823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.262853 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.334678 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.348011 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fbf42a7_347e_4355_afc5_4e70bbf58271.slice/crio-d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e WatchSource:0}: Error finding container d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e: Status 404 returned error can't find the container with id d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.405920 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.485724 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.491012 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79be7bd6_1571_415f_b5ef_f481ab24089b.slice/crio-9ec2452d9a48dec826ed475416969145a4d3f41a817e2f366a8ae2c7439c61dc WatchSource:0}: Error finding container 9ec2452d9a48dec826ed475416969145a4d3f41a817e2f366a8ae2c7439c61dc: Status 404 returned error can't find the container with id 9ec2452d9a48dec826ed475416969145a4d3f41a817e2f366a8ae2c7439c61dc Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.567404 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:16:15 crc kubenswrapper[4720]: W0202 09:16:15.615453 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice/crio-3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7 WatchSource:0}: Error finding container 3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7: Status 404 returned error can't find the container with id 3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7 Feb 02 09:16:15 crc kubenswrapper[4720]: I0202 09:16:15.645336 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-rrrtl" podUID="809bb436-ed06-47de-aa07-670cf4f4ef8e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.002338 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f77897559-wqg4q"] Feb 02 09:16:16 crc kubenswrapper[4720]: W0202 09:16:16.012621 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb096f16_61ca_432b_bc1a_42d9a4e12031.slice/crio-3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1 WatchSource:0}: Error finding container 3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1: Status 404 returned error can't find the container with id 3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1 Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.087470 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerStarted","Data":"7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.087853 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerStarted","Data":"8a2a69580026bab7ef2ae60850efb652a7b5dc1ebfc0d1a560b743bc40b9adbc"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.091267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86d4c4b4d8-gbbkh" event={"ID":"8c4ce7a3-3e40-463d-b5f9-95b3352960f2","Type":"ContainerStarted","Data":"143ad2aaf530b590ceeb1d58fdfefb25496050332e516160cecbb34f3e8257bc"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.091323 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86d4c4b4d8-gbbkh" event={"ID":"8c4ce7a3-3e40-463d-b5f9-95b3352960f2","Type":"ContainerStarted","Data":"a5a35128e0b2e3258f548d89a6c5b76d3741aee34363411e2c511842dda39933"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.094927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerStarted","Data":"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.094961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerStarted","Data":"3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.097140 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerStarted","Data":"dea98d1476ace63d0b9e8c01c5088ee7147c9d41651dd519173f31d3ed05e345"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.097202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerStarted","Data":"d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.108758 4720 generic.go:334] "Generic (PLEG): container finished" podID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerID="aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31" exitCode=0 Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.108836 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" event={"ID":"79be7bd6-1571-415f-b5ef-f481ab24089b","Type":"ContainerDied","Data":"aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.108862 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" event={"ID":"79be7bd6-1571-415f-b5ef-f481ab24089b","Type":"ContainerStarted","Data":"9ec2452d9a48dec826ed475416969145a4d3f41a817e2f366a8ae2c7439c61dc"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.116436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerStarted","Data":"3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.125206 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mz5n2" event={"ID":"db3af89e-0227-4cd5-a546-b9ef7ec514a7","Type":"ContainerStarted","Data":"6d4b8f1c0d49159ae3c41a2922382a62d824228a29cd43542a7fd3c874b68fb5"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.125244 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mz5n2" event={"ID":"db3af89e-0227-4cd5-a546-b9ef7ec514a7","Type":"ContainerStarted","Data":"823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e"} Feb 02 09:16:16 crc kubenswrapper[4720]: I0202 09:16:16.178552 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mz5n2" podStartSLOduration=12.178529566 podStartE2EDuration="12.178529566s" podCreationTimestamp="2026-02-02 09:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:16.149266401 +0000 UTC m=+1210.004891957" watchObservedRunningTime="2026-02-02 09:16:16.178529566 +0000 UTC m=+1210.034155132" Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.064484 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.135041 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerStarted","Data":"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d"} Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.135507 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.137320 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerStarted","Data":"4f088e68bb9136fb2756c3d0dd16bf845f09f5d0ccc55ea9b1e7e142542bb1a1"} Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.140086 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" event={"ID":"79be7bd6-1571-415f-b5ef-f481ab24089b","Type":"ContainerStarted","Data":"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef"} Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.140518 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.143693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerStarted","Data":"c981fe18a1377ea6a437d45330dc0b7963cc7073d1d2b4a26b838e1dbc6a75e1"} Feb 02 09:16:17 crc 
kubenswrapper[4720]: I0202 09:16:17.160905 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8565544576-78c6h" podStartSLOduration=4.160860265 podStartE2EDuration="4.160860265s" podCreationTimestamp="2026-02-02 09:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:17.155221279 +0000 UTC m=+1211.010846855" watchObservedRunningTime="2026-02-02 09:16:17.160860265 +0000 UTC m=+1211.016485821" Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.171416 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.171402291 podStartE2EDuration="5.171402291s" podCreationTimestamp="2026-02-02 09:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:17.170399149 +0000 UTC m=+1211.026024705" watchObservedRunningTime="2026-02-02 09:16:17.171402291 +0000 UTC m=+1211.027027847" Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.198022 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" podStartSLOduration=4.198004256 podStartE2EDuration="4.198004256s" podCreationTimestamp="2026-02-02 09:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:17.189201909 +0000 UTC m=+1211.044827475" watchObservedRunningTime="2026-02-02 09:16:17.198004256 +0000 UTC m=+1211.053629802" Feb 02 09:16:17 crc kubenswrapper[4720]: W0202 09:16:17.254146 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687b9563_476f_485d_bfd5_8f874470c4f2.slice/crio-3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371 WatchSource:0}: Error finding container 3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371: Status 404 returned error can't find the container with id 3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371 Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.902513 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:16:17 crc kubenswrapper[4720]: I0202 09:16:17.902802 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.166036 4720 generic.go:334] "Generic (PLEG): container finished" podID="3cf88a12-cd68-4b5c-a7b1-ad649a75791e" containerID="70a32d6ceb128dad3d55b3f12ea7cf9655b112a67fe2977c604c626098348aaa" exitCode=0 Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.166159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9h6k" event={"ID":"3cf88a12-cd68-4b5c-a7b1-ad649a75791e","Type":"ContainerDied","Data":"70a32d6ceb128dad3d55b3f12ea7cf9655b112a67fe2977c604c626098348aaa"} Feb 02 09:16:18 
crc kubenswrapper[4720]: I0202 09:16:18.170573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerStarted","Data":"cfa0d360cae26b0c2a8dc8dbb5704822a8b671b94feadfbd85eda5647e826c27"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.175698 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86d4c4b4d8-gbbkh" event={"ID":"8c4ce7a3-3e40-463d-b5f9-95b3352960f2","Type":"ContainerStarted","Data":"91e389a7c3e33b367e012ff7d55bdf37c6834632ba31169e7dd62464ae3f2c28"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.197921 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerStarted","Data":"8a29434f0aef5c00f3dd1e472b797256aa5382e06d2a3c6743df8527264ea631"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.206620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerStarted","Data":"86e0b7986f32cfae7cdb6729b14924c5af900de8c80481e6758d2d7f4aad991c"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.206662 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerStarted","Data":"3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.210123 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerStarted","Data":"4449717031a7014cf359eccae2c6706c30073c4d22cfcdfa9c79f07e6435855b"} Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.210175 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f77897559-wqg4q" Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.228055 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86d4c4b4d8-gbbkh" podStartSLOduration=24.735985763 podStartE2EDuration="25.228041163s" podCreationTimestamp="2026-02-02 09:15:53 +0000 UTC" firstStartedPulling="2026-02-02 09:16:15.180339432 +0000 UTC m=+1209.035964988" lastFinishedPulling="2026-02-02 09:16:15.672394832 +0000 UTC m=+1209.528020388" observedRunningTime="2026-02-02 09:16:18.226853746 +0000 UTC m=+1212.082479302" watchObservedRunningTime="2026-02-02 09:16:18.228041163 +0000 UTC m=+1212.083666709" Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.253916 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b896b6bb4-gxblv" podStartSLOduration=24.645400496 podStartE2EDuration="25.253876401s" podCreationTimestamp="2026-02-02 09:15:53 +0000 UTC" firstStartedPulling="2026-02-02 09:16:15.028627748 +0000 UTC m=+1208.884253304" lastFinishedPulling="2026-02-02 09:16:15.637103653 +0000 UTC m=+1209.492729209" observedRunningTime="2026-02-02 09:16:18.247326894 +0000 UTC m=+1212.102952450" watchObservedRunningTime="2026-02-02 09:16:18.253876401 +0000 UTC m=+1212.109501957" Feb 02 09:16:18 crc kubenswrapper[4720]: I0202 09:16:18.268842 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f77897559-wqg4q" podStartSLOduration=3.268824145 podStartE2EDuration="3.268824145s" podCreationTimestamp="2026-02-02 09:16:15 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:18.265991402 +0000 UTC m=+1212.121616968" watchObservedRunningTime="2026-02-02 09:16:18.268824145 +0000 UTC m=+1212.124449701" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.229643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerStarted","Data":"0063bc7e32cc8c9fe202eaea552f1c6222a45bbe87ed28e737dce52606ee371e"} Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.233334 4720 generic.go:334] "Generic (PLEG): container finished" podID="db3af89e-0227-4cd5-a546-b9ef7ec514a7" containerID="6d4b8f1c0d49159ae3c41a2922382a62d824228a29cd43542a7fd3c874b68fb5" exitCode=0 Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.233438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mz5n2" event={"ID":"db3af89e-0227-4cd5-a546-b9ef7ec514a7","Type":"ContainerDied","Data":"6d4b8f1c0d49159ae3c41a2922382a62d824228a29cd43542a7fd3c874b68fb5"} Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.255569 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.255551043 podStartE2EDuration="7.255551043s" podCreationTimestamp="2026-02-02 09:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:19.250910289 +0000 UTC m=+1213.106535845" watchObservedRunningTime="2026-02-02 09:16:19.255551043 +0000 UTC m=+1213.111176599" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.637148 4720 util.go:48] "No ready sandbox for pod can be found. 
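
The pod_startup_latency_tracker entries above fit a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image pull window (lastFinishedPulling minus firstStartedPulling), so the SLO metric excludes time spent pulling images. Worked from the horizon-86d4c4b4d8-gbbkh line:

    pull window         = 09:16:15.672394832 - 09:16:15.180339432 = 0.492055400s
    podStartSLOduration = 25.228041163s - 0.492055400s            = 24.735985763s

Pods whose images needed no pull carry the zero timestamp "0001-01-01 00:00:00 +0000 UTC" in both pull fields, so their SLO and E2E durations match (e.g. neutron-8565544576-78c6h at 4.160860265s).
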
Need to start a new one" pod="openstack/placement-db-sync-j9h6k" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.807116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs\") pod \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.807401 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data\") pod \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.807457 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts\") pod \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.807541 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w42rp\" (UniqueName: \"kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp\") pod \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.807631 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle\") pod \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\" (UID: \"3cf88a12-cd68-4b5c-a7b1-ad649a75791e\") " Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.813435 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs" (OuterVolumeSpecName: "logs") pod "3cf88a12-cd68-4b5c-a7b1-ad649a75791e" (UID: "3cf88a12-cd68-4b5c-a7b1-ad649a75791e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.816430 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp" (OuterVolumeSpecName: "kube-api-access-w42rp") pod "3cf88a12-cd68-4b5c-a7b1-ad649a75791e" (UID: "3cf88a12-cd68-4b5c-a7b1-ad649a75791e"). InnerVolumeSpecName "kube-api-access-w42rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.831455 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts" (OuterVolumeSpecName: "scripts") pod "3cf88a12-cd68-4b5c-a7b1-ad649a75791e" (UID: "3cf88a12-cd68-4b5c-a7b1-ad649a75791e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.851055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data" (OuterVolumeSpecName: "config-data") pod "3cf88a12-cd68-4b5c-a7b1-ad649a75791e" (UID: "3cf88a12-cd68-4b5c-a7b1-ad649a75791e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.852225 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cf88a12-cd68-4b5c-a7b1-ad649a75791e" (UID: "3cf88a12-cd68-4b5c-a7b1-ad649a75791e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.909020 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w42rp\" (UniqueName: \"kubernetes.io/projected/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-kube-api-access-w42rp\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.909048 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.909061 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.909071 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:19 crc kubenswrapper[4720]: I0202 09:16:19.909080 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cf88a12-cd68-4b5c-a7b1-ad649a75791e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.255663 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9h6k" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.255564 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9h6k" event={"ID":"3cf88a12-cd68-4b5c-a7b1-ad649a75791e","Type":"ContainerDied","Data":"98db2235ccfd2c5839d388f20eb0332dfa4f83a0a7e73ab2f68b0d99f2af972a"} Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.255849 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98db2235ccfd2c5839d388f20eb0332dfa4f83a0a7e73ab2f68b0d99f2af972a" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.306995 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"] Feb 02 09:16:20 crc kubenswrapper[4720]: E0202 09:16:20.308886 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf88a12-cd68-4b5c-a7b1-ad649a75791e" containerName="placement-db-sync" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.308986 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf88a12-cd68-4b5c-a7b1-ad649a75791e" containerName="placement-db-sync" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.310006 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf88a12-cd68-4b5c-a7b1-ad649a75791e" containerName="placement-db-sync" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.311829 4720 util.go:30] "No sandbox for pod can be found. 
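
The teardown above mirrors the mount sequence: once placement-db-sync-j9h6k's container exited with code 0, each volume goes through UnmountVolume (reconciler_common.go:159), UnmountVolume.TearDown (operation_generator.go:803), and a final "Volume detached" state (reconciler_common.go:293). In these entries OuterVolumeSpecName is the name used in the pod spec ("logs", "scripts", ...) while InnerVolumeSpecName is the plugin-level name; they are identical here. A small offline helper, sketched under the assumption that a log like this one is fed on stdin, that pairs the mount and unmount confirmations so a volume's lifecycle can be followed:

    // Sketch only: scan a kubelet log and print MountVolume.SetUp /
    // UnmountVolume.TearDown confirmations. The regex is illustrative and
    // tolerates the escaped quotes kubelet uses in structured log values.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`(MountVolume\.SetUp|UnmountVolume\.TearDown) succeeded for volume \\?"([^"\\]+)`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Println(m[1], "->", m[2])
            }
        }
    }

Run as: go run scan.go < kubelet.log
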
Need to start a new one" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316215 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316464 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316574 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316680 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cc7fb" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316831 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.316931 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"] Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427813 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427856 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427891 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwk5\" (UniqueName: \"kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427949 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.427967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.428015 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530174 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwk5\" (UniqueName: \"kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530275 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530295 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.530363 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.533732 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.537704 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.544739 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.546709 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.546860 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.546865 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.548780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwk5\" (UniqueName: \"kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5\") pod \"placement-6959d6dc4b-9m4m5\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:20 crc kubenswrapper[4720]: I0202 09:16:20.667249 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.277803 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mz5n2" event={"ID":"db3af89e-0227-4cd5-a546-b9ef7ec514a7","Type":"ContainerDied","Data":"823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e"} Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.278111 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823c3da66b1b4adcb660c35313cb6e73163e7438ad12697eb1a513692f30fe8e" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.310232 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mz5n2" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464399 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464467 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgn5x\" (UniqueName: \"kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464628 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.464654 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys\") pod \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\" (UID: \"db3af89e-0227-4cd5-a546-b9ef7ec514a7\") " Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.479617 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.485969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x" (OuterVolumeSpecName: "kube-api-access-tgn5x") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "kube-api-access-tgn5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.509799 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.519183 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts" (OuterVolumeSpecName: "scripts") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.551065 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.557990 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data" (OuterVolumeSpecName: "config-data") pod "db3af89e-0227-4cd5-a546-b9ef7ec514a7" (UID: "db3af89e-0227-4cd5-a546-b9ef7ec514a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.569979 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.570017 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgn5x\" (UniqueName: \"kubernetes.io/projected/db3af89e-0227-4cd5-a546-b9ef7ec514a7-kube-api-access-tgn5x\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.570030 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.570040 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.570049 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.570057 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db3af89e-0227-4cd5-a546-b9ef7ec514a7-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.693777 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.694030 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.703509 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.703650 
Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.725487 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.738105 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.755053 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 09:16:22 crc kubenswrapper[4720]: I0202 09:16:22.759844 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.287319 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mz5n2"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.287979 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.288952 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.289117 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.289284 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.409047 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57f5dcffbd-gvpfb"]
Feb 02 09:16:23 crc kubenswrapper[4720]: E0202 09:16:23.409419 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3af89e-0227-4cd5-a546-b9ef7ec514a7" containerName="keystone-bootstrap"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.409432 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3af89e-0227-4cd5-a546-b9ef7ec514a7" containerName="keystone-bootstrap"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.409603 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3af89e-0227-4cd5-a546-b9ef7ec514a7" containerName="keystone-bootstrap"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.410174 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57f5dcffbd-gvpfb"
Need to start a new one" pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.414129 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jsbgm" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.421280 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57f5dcffbd-gvpfb"] Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.444740 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.447135 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.447353 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.447493 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.449101 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.487870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-public-tls-certs\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.487923 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwt7c\" (UniqueName: \"kubernetes.io/projected/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-kube-api-access-dwt7c\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.487955 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-internal-tls-certs\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.487995 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-credential-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.488016 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-scripts\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.488049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-config-data\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: 
\"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.488076 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-fernet-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.488162 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-combined-ca-bundle\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-credential-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589581 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-scripts\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589625 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-config-data\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589657 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-fernet-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-combined-ca-bundle\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589731 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-public-tls-certs\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.589751 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwt7c\" (UniqueName: \"kubernetes.io/projected/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-kube-api-access-dwt7c\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.595226 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-scripts\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.598246 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-public-tls-certs\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.601023 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.602132 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-fernet-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.614658 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-internal-tls-certs\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.615400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwt7c\" (UniqueName: \"kubernetes.io/projected/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-kube-api-access-dwt7c\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.620322 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-combined-ca-bundle\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.627556 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-config-data\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.627922 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec11bd2b-cee2-413f-9a50-a27d03a27fd8-credential-keys\") pod \"keystone-57f5dcffbd-gvpfb\" (UID: \"ec11bd2b-cee2-413f-9a50-a27d03a27fd8\") " pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.689008 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.689952 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.694628 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"]
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.694860 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="dnsmasq-dns" containerID="cri-o://861aa861dc0ff938c3fd4f3325f0e6ff78768019a8684598dea31b2ca7b19bba" gracePeriod=10
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.782781 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57f5dcffbd-gvpfb"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.898644 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86d4c4b4d8-gbbkh"
Feb 02 09:16:23 crc kubenswrapper[4720]: I0202 09:16:23.898691 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86d4c4b4d8-gbbkh"
Feb 02 09:16:24 crc kubenswrapper[4720]: I0202 09:16:24.305949 4720 generic.go:334] "Generic (PLEG): container finished" podID="fca0fb47-7f62-4319-9852-c883684729e7" containerID="861aa861dc0ff938c3fd4f3325f0e6ff78768019a8684598dea31b2ca7b19bba" exitCode=0
Feb 02 09:16:24 crc kubenswrapper[4720]: I0202 09:16:24.306073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" event={"ID":"fca0fb47-7f62-4319-9852-c883684729e7","Type":"ContainerDied","Data":"861aa861dc0ff938c3fd4f3325f0e6ff78768019a8684598dea31b2ca7b19bba"}
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.311902 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.312144 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.312897 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.312912 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.752597 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.855485 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 09:16:25 crc kubenswrapper[4720]: I0202 09:16:25.991183 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4"
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148302 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148360 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148486 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148540 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148585 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwjt7\" (UniqueName: \"kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.148640 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config\") pod \"fca0fb47-7f62-4319-9852-c883684729e7\" (UID: \"fca0fb47-7f62-4319-9852-c883684729e7\") " Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.161448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7" (OuterVolumeSpecName: "kube-api-access-qwjt7") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "kube-api-access-qwjt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.208954 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.216794 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.219368 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.220806 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.234367 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config" (OuterVolumeSpecName: "config") pod "fca0fb47-7f62-4319-9852-c883684729e7" (UID: "fca0fb47-7f62-4319-9852-c883684729e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254353 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254402 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwjt7\" (UniqueName: \"kubernetes.io/projected/fca0fb47-7f62-4319-9852-c883684729e7-kube-api-access-qwjt7\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254420 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254433 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254447 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.254459 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fca0fb47-7f62-4319-9852-c883684729e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.268310 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.281741 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.289053 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57f5dcffbd-gvpfb"] Feb 02 09:16:26 crc kubenswrapper[4720]: W0202 09:16:26.291066 4720 manager.go:1169] 
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.323322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vn2mf" event={"ID":"a1890e68-1a9c-4180-b989-6e178510e23b","Type":"ContainerStarted","Data":"b408d649ad6772f3b609eb9cb1867148f1db080f316b2797988f8b830f7273de"}
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.331583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4" event={"ID":"fca0fb47-7f62-4319-9852-c883684729e7","Type":"ContainerDied","Data":"e4f1d1cf9907425053e40f111aee078639a1691c9913c136019ad63997390c56"}
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.331631 4720 scope.go:117] "RemoveContainer" containerID="861aa861dc0ff938c3fd4f3325f0e6ff78768019a8684598dea31b2ca7b19bba"
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.331749 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-4zcd4"
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.334734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57f5dcffbd-gvpfb" event={"ID":"ec11bd2b-cee2-413f-9a50-a27d03a27fd8","Type":"ContainerStarted","Data":"c34fe73c984f81a8c26b255c20cf976e13c1c77ccb293f182ebd85305042cd20"}
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.348893 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerStarted","Data":"376e9ad842a14a671e1a0e1441057b751a8dbf37dc7b59e73bc8401e27de8814"}
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.385361 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vn2mf" podStartSLOduration=3.090132839 podStartE2EDuration="45.385342477s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="2026-02-02 09:15:43.42645797 +0000 UTC m=+1177.282083526" lastFinishedPulling="2026-02-02 09:16:25.721667608 +0000 UTC m=+1219.577293164" observedRunningTime="2026-02-02 09:16:26.347236335 +0000 UTC m=+1220.202861891" watchObservedRunningTime="2026-02-02 09:16:26.385342477 +0000 UTC m=+1220.240968033"
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.417039 4720 scope.go:117] "RemoveContainer" containerID="fd7339bc42b6b01336205b2038b5f1391c56d7e1e125f04a2f63678e529f8dd2"
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.417330 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"]
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.436426 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-4zcd4"]
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.455647 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"]
Feb 02 09:16:26 crc kubenswrapper[4720]: I0202 09:16:26.925125 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca0fb47-7f62-4319-9852-c883684729e7" path="/var/lib/kubelet/pods/fca0fb47-7f62-4319-9852-c883684729e7/volumes"
path="/var/lib/kubelet/pods/fca0fb47-7f62-4319-9852-c883684729e7/volumes" Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.364490 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57f5dcffbd-gvpfb" event={"ID":"ec11bd2b-cee2-413f-9a50-a27d03a27fd8","Type":"ContainerStarted","Data":"16b59b11ac53dfd52a2ead8d637ba3e34588fe619b305fabb7af5305ce5e9c86"} Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.365273 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.368227 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerStarted","Data":"d1a02e0c0ed1b597d39e63e523466196cfcc20178d1d7c7d1da6563051c6aeb7"} Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.368262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerStarted","Data":"74819d035a8da316b52927667030cc4186809210f7b0bb4d280153462e8c5e32"} Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.368290 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerStarted","Data":"595a82bf1e95abb35f082946480016954302c2bc7874c41209c07766f145f2e8"} Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.368922 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.368982 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.402996 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57f5dcffbd-gvpfb" podStartSLOduration=4.402979366 podStartE2EDuration="4.402979366s" podCreationTimestamp="2026-02-02 09:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:27.391222953 +0000 UTC m=+1221.246848509" watchObservedRunningTime="2026-02-02 09:16:27.402979366 +0000 UTC m=+1221.258604922" Feb 02 09:16:27 crc kubenswrapper[4720]: I0202 09:16:27.419096 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6959d6dc4b-9m4m5" podStartSLOduration=7.419080456 podStartE2EDuration="7.419080456s" podCreationTimestamp="2026-02-02 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:27.409350689 +0000 UTC m=+1221.264976235" watchObservedRunningTime="2026-02-02 09:16:27.419080456 +0000 UTC m=+1221.274706012" Feb 02 09:16:28 crc kubenswrapper[4720]: I0202 09:16:28.383999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jspmg" event={"ID":"1a624e5d-098a-44e1-95b7-fa398979891a","Type":"ContainerStarted","Data":"304bf2f7a1577cc0a68f62be1ea364ca057b1ff27d74958df943220cf45b8721"} Feb 02 09:16:28 crc kubenswrapper[4720]: I0202 09:16:28.408297 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-jspmg" podStartSLOduration=3.772732403 podStartE2EDuration="47.408274519s" 
Feb 02 09:16:29 crc kubenswrapper[4720]: I0202 09:16:29.395680 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcqm2" event={"ID":"691b5691-2178-4f8e-a40c-7dfe5bec0f1b","Type":"ContainerStarted","Data":"bad88ee43365b1ca57cd5723f8557d4c9b72ce9481ad29345c4ac151b6647a2c"}
Feb 02 09:16:29 crc kubenswrapper[4720]: I0202 09:16:29.398016 4720 generic.go:334] "Generic (PLEG): container finished" podID="a1890e68-1a9c-4180-b989-6e178510e23b" containerID="b408d649ad6772f3b609eb9cb1867148f1db080f316b2797988f8b830f7273de" exitCode=0
Feb 02 09:16:29 crc kubenswrapper[4720]: I0202 09:16:29.398061 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vn2mf" event={"ID":"a1890e68-1a9c-4180-b989-6e178510e23b","Type":"ContainerDied","Data":"b408d649ad6772f3b609eb9cb1867148f1db080f316b2797988f8b830f7273de"}
Feb 02 09:16:29 crc kubenswrapper[4720]: I0202 09:16:29.440515 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-mcqm2" podStartSLOduration=3.267165731 podStartE2EDuration="48.440496734s" podCreationTimestamp="2026-02-02 09:15:41 +0000 UTC" firstStartedPulling="2026-02-02 09:15:43.122992071 +0000 UTC m=+1176.978617627" lastFinishedPulling="2026-02-02 09:16:28.296323064 +0000 UTC m=+1222.151948630" observedRunningTime="2026-02-02 09:16:29.418299397 +0000 UTC m=+1223.273924973" watchObservedRunningTime="2026-02-02 09:16:29.440496734 +0000 UTC m=+1223.296122290"
Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.601042 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vn2mf"
Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.667826 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle\") pod \"a1890e68-1a9c-4180-b989-6e178510e23b\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") "
Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.668075 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data\") pod \"a1890e68-1a9c-4180-b989-6e178510e23b\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") "
Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.668143 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqnn\" (UniqueName: \"kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn\") pod \"a1890e68-1a9c-4180-b989-6e178510e23b\" (UID: \"a1890e68-1a9c-4180-b989-6e178510e23b\") "
Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.673277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a1890e68-1a9c-4180-b989-6e178510e23b" (UID: "a1890e68-1a9c-4180-b989-6e178510e23b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.673857 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn" (OuterVolumeSpecName: "kube-api-access-zpqnn") pod "a1890e68-1a9c-4180-b989-6e178510e23b" (UID: "a1890e68-1a9c-4180-b989-6e178510e23b"). InnerVolumeSpecName "kube-api-access-zpqnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.688992 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.703949 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1890e68-1a9c-4180-b989-6e178510e23b" (UID: "a1890e68-1a9c-4180-b989-6e178510e23b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.770764 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.770794 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqnn\" (UniqueName: \"kubernetes.io/projected/a1890e68-1a9c-4180-b989-6e178510e23b-kube-api-access-zpqnn\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.770805 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1890e68-1a9c-4180-b989-6e178510e23b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:33 crc kubenswrapper[4720]: I0202 09:16:33.901100 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86d4c4b4d8-gbbkh" podUID="8c4ce7a3-3e40-463d-b5f9-95b3352960f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.460312 4720 generic.go:334] "Generic (PLEG): container finished" podID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" containerID="bad88ee43365b1ca57cd5723f8557d4c9b72ce9481ad29345c4ac151b6647a2c" exitCode=0 Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.460410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcqm2" event={"ID":"691b5691-2178-4f8e-a40c-7dfe5bec0f1b","Type":"ContainerDied","Data":"bad88ee43365b1ca57cd5723f8557d4c9b72ce9481ad29345c4ac151b6647a2c"} Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.462112 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vn2mf" event={"ID":"a1890e68-1a9c-4180-b989-6e178510e23b","Type":"ContainerDied","Data":"0f1e868e6b0e6cc600ef710c3184e10fab6f944ca9f06e6419af0717ae3f6734"} Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.462137 4720 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="0f1e868e6b0e6cc600ef710c3184e10fab6f944ca9f06e6419af0717ae3f6734" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.462171 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vn2mf" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.841964 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-64767b4cf5-g7ntw"] Feb 02 09:16:34 crc kubenswrapper[4720]: E0202 09:16:34.842686 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="init" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.842702 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="init" Feb 02 09:16:34 crc kubenswrapper[4720]: E0202 09:16:34.842738 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" containerName="barbican-db-sync" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.842745 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" containerName="barbican-db-sync" Feb 02 09:16:34 crc kubenswrapper[4720]: E0202 09:16:34.842766 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="dnsmasq-dns" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.842773 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="dnsmasq-dns" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.843000 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" containerName="barbican-db-sync" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.843024 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca0fb47-7f62-4319-9852-c883684729e7" containerName="dnsmasq-dns" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.854183 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.857759 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5ffcd48446-zlpmv"] Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.859198 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.862776 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.863113 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n77xk" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.863279 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.879562 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64767b4cf5-g7ntw"] Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.889739 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.908691 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ffcd48446-zlpmv"] Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914050 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f17bba-bccc-4cec-92ac-20d50fe48ed8-logs\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914227 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-logs\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914276 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fkj\" (UniqueName: \"kubernetes.io/projected/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-kube-api-access-g4fkj\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914296 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-combined-ca-bundle\") pod \"barbican-worker-64767b4cf5-g7ntw\" 
(UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914343 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data-custom\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914397 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn2k\" (UniqueName: \"kubernetes.io/projected/70f17bba-bccc-4cec-92ac-20d50fe48ed8-kube-api-access-4xn2k\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data-custom\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.914480 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-combined-ca-bundle\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.957431 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"] Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.959619 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:34 crc kubenswrapper[4720]: I0202 09:16:34.978509 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"] Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-logs\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016586 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4fkj\" (UniqueName: \"kubernetes.io/projected/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-kube-api-access-g4fkj\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-combined-ca-bundle\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016673 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data-custom\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016699 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xn2k\" (UniqueName: \"kubernetes.io/projected/70f17bba-bccc-4cec-92ac-20d50fe48ed8-kube-api-access-4xn2k\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 
09:16:35.016771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016791 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data-custom\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016814 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbg8\" (UniqueName: \"kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016830 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-combined-ca-bundle\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016899 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f17bba-bccc-4cec-92ac-20d50fe48ed8-logs\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.016951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.017078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-logs\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " 
pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.018037 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f17bba-bccc-4cec-92ac-20d50fe48ed8-logs\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.025499 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.028038 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-combined-ca-bundle\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.028543 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-combined-ca-bundle\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.029157 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data-custom\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.031468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-config-data-custom\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.044024 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f17bba-bccc-4cec-92ac-20d50fe48ed8-config-data\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.051463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xn2k\" (UniqueName: \"kubernetes.io/projected/70f17bba-bccc-4cec-92ac-20d50fe48ed8-kube-api-access-4xn2k\") pod \"barbican-worker-64767b4cf5-g7ntw\" (UID: \"70f17bba-bccc-4cec-92ac-20d50fe48ed8\") " pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.057911 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"] Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.059290 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.064122 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.068690 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4fkj\" (UniqueName: \"kubernetes.io/projected/323388ac-fb46-49d8-9645-a7fa0bf0fbfe-kube-api-access-g4fkj\") pod \"barbican-keystone-listener-5ffcd48446-zlpmv\" (UID: \"323388ac-fb46-49d8-9645-a7fa0bf0fbfe\") " pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.071091 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"] Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118053 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118098 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbg8\" (UniqueName: \"kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118147 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118178 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " 
pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118295 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118313 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2ws\" (UniqueName: \"kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118336 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.118352 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.120262 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.121068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.123543 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.124034 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.125090 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 
09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.138435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbg8\" (UniqueName: \"kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8\") pod \"dnsmasq-dns-75c8ddd69c-mzp8p\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.182709 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64767b4cf5-g7ntw" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.207462 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220041 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220145 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220181 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2ws\" (UniqueName: \"kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.220601 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.226125 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.226139 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.226494 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.246328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2ws\" (UniqueName: \"kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws\") pod \"barbican-api-7db56589cb-hzwrj\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.288809 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.441441 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerStarted","Data":"2e3bf9af39dd7fb3d5d5dee060ab1cec7bfe07407534286937d75e11269ab7a6"} Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480216 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-central-agent" containerID="cri-o://a998e4cbb2fa3bb6c533bcc701c44070e3c879ba38cd8a4b6b970daaeb39c7ae" gracePeriod=30 Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480370 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="proxy-httpd" containerID="cri-o://2e3bf9af39dd7fb3d5d5dee060ab1cec7bfe07407534286937d75e11269ab7a6" gracePeriod=30 Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480414 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="sg-core" containerID="cri-o://376e9ad842a14a671e1a0e1441057b751a8dbf37dc7b59e73bc8401e27de8814" gracePeriod=30 Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480451 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-notification-agent" containerID="cri-o://8a29434f0aef5c00f3dd1e472b797256aa5382e06d2a3c6743df8527264ea631" gracePeriod=30 Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.480589 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.502015 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.977142956 podStartE2EDuration="54.501997395s" 
Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.629999 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64767b4cf5-g7ntw"]
Feb 02 09:16:35 crc kubenswrapper[4720]: W0202 09:16:35.630061 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70f17bba_bccc_4cec_92ac_20d50fe48ed8.slice/crio-bfd18c895a143398407e8e66526d01270bee4be9e6e83500884f2a955dc5a09f WatchSource:0}: Error finding container bfd18c895a143398407e8e66526d01270bee4be9e6e83500884f2a955dc5a09f: Status 404 returned error can't find the container with id bfd18c895a143398407e8e66526d01270bee4be9e6e83500884f2a955dc5a09f
Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.721130 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ffcd48446-zlpmv"]
Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.839509 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"]
Feb 02 09:16:35 crc kubenswrapper[4720]: W0202 09:16:35.859999 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ca6373_245c_4687_92af_516683b180f5.slice/crio-7d14ae14b60bc37b97f76b2177a6c78105d886c734a51b68ec16d42f6e6dcb95 WatchSource:0}: Error finding container 7d14ae14b60bc37b97f76b2177a6c78105d886c734a51b68ec16d42f6e6dcb95: Status 404 returned error can't find the container with id 7d14ae14b60bc37b97f76b2177a6c78105d886c734a51b68ec16d42f6e6dcb95
Feb 02 09:16:35 crc kubenswrapper[4720]: I0202 09:16:35.945792 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcqm2"
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.037185 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038286 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038324 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038356 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038376 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gdcv\" (UniqueName: \"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038510 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle\") pod \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\" (UID: \"691b5691-2178-4f8e-a40c-7dfe5bec0f1b\") "
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.038855 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.040582 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts" (OuterVolumeSpecName: "scripts") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.042278 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv" (OuterVolumeSpecName: "kube-api-access-6gdcv") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "kube-api-access-6gdcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.042759 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.068614 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: W0202 09:16:36.092547 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36847140_0ea3_4683_a408_8563e20a543a.slice/crio-3290be6410d8e81c79b8f749e7ba54a2f31f41736837afd833134aa4860d6bdc WatchSource:0}: Error finding container 3290be6410d8e81c79b8f749e7ba54a2f31f41736837afd833134aa4860d6bdc: Status 404 returned error can't find the container with id 3290be6410d8e81c79b8f749e7ba54a2f31f41736837afd833134aa4860d6bdc
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.099477 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"]
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.110197 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data" (OuterVolumeSpecName: "config-data") pod "691b5691-2178-4f8e-a40c-7dfe5bec0f1b" (UID: "691b5691-2178-4f8e-a40c-7dfe5bec0f1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.140629 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.140662 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.140673 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.140685 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.140695 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gdcv\" (UniqueName: \"kubernetes.io/projected/691b5691-2178-4f8e-a40c-7dfe5bec0f1b-kube-api-access-6gdcv\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.495905 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerStarted","Data":"0a9be0241df1a16f26b27e8b87dc94bdbd3545abeba84b765a69bafd98906c59"}
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.495956 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerStarted","Data":"3290be6410d8e81c79b8f749e7ba54a2f31f41736837afd833134aa4860d6bdc"}
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.498538 4720 generic.go:334] "Generic (PLEG): container finished" podID="21ca6373-245c-4687-92af-516683b180f5" containerID="fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7" exitCode=0
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.498621 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" event={"ID":"21ca6373-245c-4687-92af-516683b180f5","Type":"ContainerDied","Data":"fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7"}
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.498676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" event={"ID":"21ca6373-245c-4687-92af-516683b180f5","Type":"ContainerStarted","Data":"7d14ae14b60bc37b97f76b2177a6c78105d886c734a51b68ec16d42f6e6dcb95"}
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.500211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mcqm2" event={"ID":"691b5691-2178-4f8e-a40c-7dfe5bec0f1b","Type":"ContainerDied","Data":"013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0"}
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.500253 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013ee3c92d9ffd4203dbe0410d3203a9f8dc28be48b73e93dc5e8b55559503f0"
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.500273 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mcqm2"
Need to start a new one" pod="openstack/cinder-db-sync-mcqm2" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.504227 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64767b4cf5-g7ntw" event={"ID":"70f17bba-bccc-4cec-92ac-20d50fe48ed8","Type":"ContainerStarted","Data":"bfd18c895a143398407e8e66526d01270bee4be9e6e83500884f2a955dc5a09f"} Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerDied","Data":"2e3bf9af39dd7fb3d5d5dee060ab1cec7bfe07407534286937d75e11269ab7a6"} Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520501 4720 generic.go:334] "Generic (PLEG): container finished" podID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerID="2e3bf9af39dd7fb3d5d5dee060ab1cec7bfe07407534286937d75e11269ab7a6" exitCode=0 Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520532 4720 generic.go:334] "Generic (PLEG): container finished" podID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerID="376e9ad842a14a671e1a0e1441057b751a8dbf37dc7b59e73bc8401e27de8814" exitCode=2 Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520539 4720 generic.go:334] "Generic (PLEG): container finished" podID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerID="a998e4cbb2fa3bb6c533bcc701c44070e3c879ba38cd8a4b6b970daaeb39c7ae" exitCode=0 Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerDied","Data":"376e9ad842a14a671e1a0e1441057b751a8dbf37dc7b59e73bc8401e27de8814"} Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.520616 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerDied","Data":"a998e4cbb2fa3bb6c533bcc701c44070e3c879ba38cd8a4b6b970daaeb39c7ae"} Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.526629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" event={"ID":"323388ac-fb46-49d8-9645-a7fa0bf0fbfe","Type":"ContainerStarted","Data":"856863fc6cc82552dc55a95bbefab513fb5be48e28dbaf7d830fdb2082d43e66"} Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.738941 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:36 crc kubenswrapper[4720]: E0202 09:16:36.739567 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" containerName="cinder-db-sync" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.739585 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" containerName="cinder-db-sync" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.739763 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" containerName="cinder-db-sync" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.741040 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.744413 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.745657 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.746571 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.746840 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ptvgr" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.771510 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.828048 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"] Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.854863 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.854937 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.854968 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.855027 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsz6\" (UniqueName: \"kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.855064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.855098 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.855178 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:36 crc 
kubenswrapper[4720]: I0202 09:16:36.856559 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.861580 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.936281 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.937687 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.944478 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.955390 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960545 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsz6\" (UniqueName: \"kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960565 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wkx\" (UniqueName: \"kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960725 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960745 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960766 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960870 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960913 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.960934 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.961382 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.968713 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.968906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.971293 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:36 crc 
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.992920 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 02 09:16:36 crc kubenswrapper[4720]: I0202 09:16:36.994378 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.007262 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.010351 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsz6\" (UniqueName: \"kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6\") pod \"cinder-scheduler-0\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " pod="openstack/cinder-scheduler-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.039345 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063025 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063079 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063100 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063127 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063144 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063159 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqctt\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063186 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063282 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063300 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063321 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063339 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063360 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063377 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063392 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063484 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063531 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063555 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063621 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8h8\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063669 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063689 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063734 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063750 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063778 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg"
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063795 4720 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063815 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.063853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wkx\" (UniqueName: \"kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.064723 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.067042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.067367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.071699 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.072065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.076419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.082380 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.119114 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wkx\" (UniqueName: \"kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx\") pod \"dnsmasq-dns-5784cf869f-zfcgg\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.127910 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.129220 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.135984 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.139300 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168570 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168601 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168630 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8h8\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168648 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168683 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168712 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168760 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168779 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168822 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168843 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168899 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168916 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqctt\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168977 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.168996 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169017 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169034 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 
09:16:37.169149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169171 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169203 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169238 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.169759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.171212 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.171286 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.171583 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" 
Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.171610 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.177522 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.185151 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.185200 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.185224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.185248 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.185959 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.187503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.187762 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190374 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190449 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190469 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190494 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.190931 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.192518 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.193836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.196237 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.196393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.198109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.199476 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.200334 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.200572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.200791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.205412 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.207139 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.236495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8h8\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8\") pod \"cinder-backup-0\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.239477 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqctt\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.245643 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303483 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qnd9\" (UniqueName: \"kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303619 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303669 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303711 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303769 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.303809 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.372683 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.406955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407080 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qnd9\" (UniqueName: \"kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407189 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.407680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.408066 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.414569 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 
09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.416286 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.425310 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qnd9\" (UniqueName: \"kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.428000 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.428120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data\") pod \"cinder-api-0\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " pod="openstack/cinder-api-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.466282 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:37 crc kubenswrapper[4720]: I0202 09:16:37.553595 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.553756 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" event={"ID":"21ca6373-245c-4687-92af-516683b180f5","Type":"ContainerStarted","Data":"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067"} Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.554436 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="dnsmasq-dns" containerID="cri-o://5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067" gracePeriod=10 Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.554521 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.561343 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64767b4cf5-g7ntw" event={"ID":"70f17bba-bccc-4cec-92ac-20d50fe48ed8","Type":"ContainerStarted","Data":"ff9ab973fed5ff188d37fdd33bc61c5762bd1f80c108481d0fecc537cbe1df87"} Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.566397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" event={"ID":"323388ac-fb46-49d8-9645-a7fa0bf0fbfe","Type":"ContainerStarted","Data":"958574c3b22fc7048d5ac2dbb8a95718bc2ae97e8c700b4db5f12e3a058752fe"} Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.569080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" 
event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerStarted","Data":"0499c590610fc5cb47e1f89d7ef38de9c057dc7e428daed25d7bb0eb40ddd90d"} Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.569754 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.569906 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:16:38 crc kubenswrapper[4720]: W0202 09:16:38.590971 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01974d03_b4c4_4ed6_99fe_a49cd815e6f2.slice/crio-27a587c8aacd4a4d4040e5022ea25fe1aad60138d63e2086b5f0aecdd3679a9a WatchSource:0}: Error finding container 27a587c8aacd4a4d4040e5022ea25fe1aad60138d63e2086b5f0aecdd3679a9a: Status 404 returned error can't find the container with id 27a587c8aacd4a4d4040e5022ea25fe1aad60138d63e2086b5f0aecdd3679a9a Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.610853 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.628622 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.634623 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" podStartSLOduration=4.634552753 podStartE2EDuration="4.634552753s" podCreationTimestamp="2026-02-02 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:38.581457714 +0000 UTC m=+1232.437083270" watchObservedRunningTime="2026-02-02 09:16:38.634552753 +0000 UTC m=+1232.490178309" Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.733338 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7db56589cb-hzwrj" podStartSLOduration=3.7333152739999997 podStartE2EDuration="3.733315274s" podCreationTimestamp="2026-02-02 09:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:38.597681698 +0000 UTC m=+1232.453307254" watchObservedRunningTime="2026-02-02 09:16:38.733315274 +0000 UTC m=+1232.588940830" Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.769047 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:38 crc kubenswrapper[4720]: I0202 09:16:38.850240 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.020983 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:39 crc kubenswrapper[4720]: W0202 09:16:39.056491 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod468e2e04_844c_47a2_a554_1fff701d0802.slice/crio-0857059f83d290e655a58161c20492be9d15b8c9711a8ba861969a10aff839ce WatchSource:0}: Error finding container 0857059f83d290e655a58161c20492be9d15b8c9711a8ba861969a10aff839ce: Status 404 returned error can't find the container with id 0857059f83d290e655a58161c20492be9d15b8c9711a8ba861969a10aff839ce Feb 02 09:16:39 crc kubenswrapper[4720]: 
I0202 09:16:39.210253 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.356667 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.356767 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.356873 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.356942 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.356994 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.357054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lbg8\" (UniqueName: \"kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8\") pod \"21ca6373-245c-4687-92af-516683b180f5\" (UID: \"21ca6373-245c-4687-92af-516683b180f5\") " Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.361978 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8" (OuterVolumeSpecName: "kube-api-access-4lbg8") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "kube-api-access-4lbg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.412592 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config" (OuterVolumeSpecName: "config") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.413085 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.422083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.437476 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.441680 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21ca6373-245c-4687-92af-516683b180f5" (UID: "21ca6373-245c-4687-92af-516683b180f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460381 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460413 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460422 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460431 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460441 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21ca6373-245c-4687-92af-516683b180f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.460450 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lbg8\" (UniqueName: \"kubernetes.io/projected/21ca6373-245c-4687-92af-516683b180f5-kube-api-access-4lbg8\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.608100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerStarted","Data":"085c6eec5b8a8084e7a82b0b2c56021f05c9354b1a64d813b3c9b9cec4a8cb01"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.613864 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerStarted","Data":"0857059f83d290e655a58161c20492be9d15b8c9711a8ba861969a10aff839ce"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.617600 4720 generic.go:334] "Generic (PLEG): container finished" podID="6f54a575-b00e-4748-ab42-499cf997a92c" containerID="df59cbf8b37a42d8fb62d192c2c7f492c56972e8ee25118cc1b80abacf35528e" exitCode=0 Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.617679 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" event={"ID":"6f54a575-b00e-4748-ab42-499cf997a92c","Type":"ContainerDied","Data":"df59cbf8b37a42d8fb62d192c2c7f492c56972e8ee25118cc1b80abacf35528e"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.617713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" event={"ID":"6f54a575-b00e-4748-ab42-499cf997a92c","Type":"ContainerStarted","Data":"ce32023eb8c6f47c283281db7b9abae7616164b5139fbf89e3b27323a0b166ab"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.620782 4720 generic.go:334] "Generic (PLEG): container finished" podID="21ca6373-245c-4687-92af-516683b180f5" containerID="5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067" exitCode=0 Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.620889 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.620869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" event={"ID":"21ca6373-245c-4687-92af-516683b180f5","Type":"ContainerDied","Data":"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.621062 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mzp8p" event={"ID":"21ca6373-245c-4687-92af-516683b180f5","Type":"ContainerDied","Data":"7d14ae14b60bc37b97f76b2177a6c78105d886c734a51b68ec16d42f6e6dcb95"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.621133 4720 scope.go:117] "RemoveContainer" containerID="5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.625990 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerStarted","Data":"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.626036 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerStarted","Data":"27a587c8aacd4a4d4040e5022ea25fe1aad60138d63e2086b5f0aecdd3679a9a"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.632479 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64767b4cf5-g7ntw" event={"ID":"70f17bba-bccc-4cec-92ac-20d50fe48ed8","Type":"ContainerStarted","Data":"b8cee5f80a9faf52feff9b7c1d60b8d7655cabcff211e6a7f2adf463749c25a9"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.648203 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2f53fcb-687a-4a01-9949-6c50248fd792","Type":"ContainerStarted","Data":"e960e9fd497853c087ed777439c38140bdb473db5616070e0ecf889e851a0966"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 
09:16:39.651401 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" event={"ID":"323388ac-fb46-49d8-9645-a7fa0bf0fbfe","Type":"ContainerStarted","Data":"8de6ec7cec651bc6870b35d4592e1181144300aabd10c71b7b220e6331b0c251"} Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.669480 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-64767b4cf5-g7ntw" podStartSLOduration=3.238081605 podStartE2EDuration="5.669452711s" podCreationTimestamp="2026-02-02 09:16:34 +0000 UTC" firstStartedPulling="2026-02-02 09:16:35.634666287 +0000 UTC m=+1229.490291853" lastFinishedPulling="2026-02-02 09:16:38.066037403 +0000 UTC m=+1231.921662959" observedRunningTime="2026-02-02 09:16:39.658008289 +0000 UTC m=+1233.513633855" watchObservedRunningTime="2026-02-02 09:16:39.669452711 +0000 UTC m=+1233.525078267" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.700483 4720 scope.go:117] "RemoveContainer" containerID="fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.701915 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5ffcd48446-zlpmv" podStartSLOduration=3.35181313 podStartE2EDuration="5.701877379s" podCreationTimestamp="2026-02-02 09:16:34 +0000 UTC" firstStartedPulling="2026-02-02 09:16:35.760308725 +0000 UTC m=+1229.615934281" lastFinishedPulling="2026-02-02 09:16:38.110372974 +0000 UTC m=+1231.965998530" observedRunningTime="2026-02-02 09:16:39.694243979 +0000 UTC m=+1233.549869535" watchObservedRunningTime="2026-02-02 09:16:39.701877379 +0000 UTC m=+1233.557502935" Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.859534 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"] Feb 02 09:16:39 crc kubenswrapper[4720]: I0202 09:16:39.868544 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mzp8p"] Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.094739 4720 scope.go:117] "RemoveContainer" containerID="5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067" Feb 02 09:16:40 crc kubenswrapper[4720]: E0202 09:16:40.095427 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067\": container with ID starting with 5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067 not found: ID does not exist" containerID="5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.095473 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067"} err="failed to get container status \"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067\": rpc error: code = NotFound desc = could not find container \"5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067\": container with ID starting with 5fa939311963953a95820d20fd83d30fe3282dbd524c37ce93a719eced7bd067 not found: ID does not exist" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.095505 4720 scope.go:117] "RemoveContainer" containerID="fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7" Feb 02 09:16:40 crc kubenswrapper[4720]: E0202 09:16:40.096009 4720 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7\": container with ID starting with fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7 not found: ID does not exist" containerID="fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.096039 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7"} err="failed to get container status \"fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7\": rpc error: code = NotFound desc = could not find container \"fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7\": container with ID starting with fd6ef9f76343c7f510475c5e96199968dc786105f125b72a63840c20504cc3c7 not found: ID does not exist" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.401147 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.665126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" event={"ID":"6f54a575-b00e-4748-ab42-499cf997a92c","Type":"ContainerStarted","Data":"24734a6bc9622ce8015fe7f60caa143aaa6dd4ac51582e1a2b14c4739832328c"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.665734 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.671289 4720 generic.go:334] "Generic (PLEG): container finished" podID="1a624e5d-098a-44e1-95b7-fa398979891a" containerID="304bf2f7a1577cc0a68f62be1ea364ca057b1ff27d74958df943220cf45b8721" exitCode=0 Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.671413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jspmg" event={"ID":"1a624e5d-098a-44e1-95b7-fa398979891a","Type":"ContainerDied","Data":"304bf2f7a1577cc0a68f62be1ea364ca057b1ff27d74958df943220cf45b8721"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.673459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerStarted","Data":"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.673608 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api-log" containerID="cri-o://1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" gracePeriod=30 Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.673944 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.674018 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api" containerID="cri-o://86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" gracePeriod=30 Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.686691 4720 generic.go:334] "Generic (PLEG): container finished" podID="06a1ffe6-f27c-4751-9872-186b2010e2f0" 
containerID="8a29434f0aef5c00f3dd1e472b797256aa5382e06d2a3c6743df8527264ea631" exitCode=0 Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.686782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerDied","Data":"8a29434f0aef5c00f3dd1e472b797256aa5382e06d2a3c6743df8527264ea631"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.686808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06a1ffe6-f27c-4751-9872-186b2010e2f0","Type":"ContainerDied","Data":"7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.686819 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4a373025a1272adfef77bafb8a8c887d4c9ced77f124421c5b1e5a39b15f7f" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.693007 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerStarted","Data":"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.693115 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerStarted","Data":"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.695535 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerStarted","Data":"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310"} Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.707563 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" podStartSLOduration=4.707546964 podStartE2EDuration="4.707546964s" podCreationTimestamp="2026-02-02 09:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:40.686154307 +0000 UTC m=+1234.541779863" watchObservedRunningTime="2026-02-02 09:16:40.707546964 +0000 UTC m=+1234.563172520" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.729199 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.729655 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.729645658 podStartE2EDuration="3.729645658s" podCreationTimestamp="2026-02-02 09:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:40.717865019 +0000 UTC m=+1234.573490565" watchObservedRunningTime="2026-02-02 09:16:40.729645658 +0000 UTC m=+1234.585271214" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.756803 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.6615583430000003 podStartE2EDuration="4.756788131s" podCreationTimestamp="2026-02-02 09:16:36 +0000 UTC" firstStartedPulling="2026-02-02 09:16:39.046228894 +0000 UTC m=+1232.901854450" lastFinishedPulling="2026-02-02 09:16:40.141458672 +0000 UTC m=+1233.997084238" observedRunningTime="2026-02-02 09:16:40.746050727 +0000 UTC m=+1234.601676283" watchObservedRunningTime="2026-02-02 09:16:40.756788131 +0000 UTC m=+1234.612413687" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.793189 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802139 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802173 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802235 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802281 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5rc\" (UniqueName: \"kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.802355 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data\") pod \"06a1ffe6-f27c-4751-9872-186b2010e2f0\" (UID: \"06a1ffe6-f27c-4751-9872-186b2010e2f0\") " Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.803444 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.804547 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.806580 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.806771 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06a1ffe6-f27c-4751-9872-186b2010e2f0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.815268 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc" (OuterVolumeSpecName: "kube-api-access-vd5rc") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "kube-api-access-vd5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.829064 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts" (OuterVolumeSpecName: "scripts") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.866086 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.897939 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ca6373-245c-4687-92af-516683b180f5" path="/var/lib/kubelet/pods/21ca6373-245c-4687-92af-516683b180f5/volumes" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.908939 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.908964 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.908973 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd5rc\" (UniqueName: \"kubernetes.io/projected/06a1ffe6-f27c-4751-9872-186b2010e2f0-kube-api-access-vd5rc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.932110 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:40 crc kubenswrapper[4720]: I0202 09:16:40.985012 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data" (OuterVolumeSpecName: "config-data") pod "06a1ffe6-f27c-4751-9872-186b2010e2f0" (UID: "06a1ffe6-f27c-4751-9872-186b2010e2f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.011069 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.011104 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a1ffe6-f27c-4751-9872-186b2010e2f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.256622 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.315397 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.315735 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.315817 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.315931 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.315975 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qnd9\" (UniqueName: \"kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316068 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle\") pod \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\" (UID: \"01974d03-b4c4-4ed6-99fe-a49cd815e6f2\") " Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316072 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs" (OuterVolumeSpecName: "logs") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316428 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.316442 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.321983 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9" (OuterVolumeSpecName: "kube-api-access-8qnd9") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "kube-api-access-8qnd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.322355 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts" (OuterVolumeSpecName: "scripts") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.322538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.352815 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.370761 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data" (OuterVolumeSpecName: "config-data") pod "01974d03-b4c4-4ed6-99fe-a49cd815e6f2" (UID: "01974d03-b4c4-4ed6-99fe-a49cd815e6f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.418081 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.418109 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.418121 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.418129 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qnd9\" (UniqueName: \"kubernetes.io/projected/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-kube-api-access-8qnd9\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.418138 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01974d03-b4c4-4ed6-99fe-a49cd815e6f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.709510 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2f53fcb-687a-4a01-9949-6c50248fd792","Type":"ContainerStarted","Data":"d53ca2f3e3bd40b35b881371f89a22d4e641c8aab8b4e6702bd2ba80a476743e"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.710517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2f53fcb-687a-4a01-9949-6c50248fd792","Type":"ContainerStarted","Data":"bab649082b43531730d16b530a1a8e13da2262a21bf1d9188f569f9817d1ba42"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.714758 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerStarted","Data":"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722232 4720 generic.go:334] "Generic (PLEG): container finished" podID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerID="86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" exitCode=0 Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722274 4720 generic.go:334] "Generic (PLEG): container finished" podID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerID="1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" exitCode=143 Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722350 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerDied","Data":"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722530 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerDied","Data":"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722550 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"01974d03-b4c4-4ed6-99fe-a49cd815e6f2","Type":"ContainerDied","Data":"27a587c8aacd4a4d4040e5022ea25fe1aad60138d63e2086b5f0aecdd3679a9a"} Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722590 4720 scope.go:117] "RemoveContainer" containerID="86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.722780 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.746046 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.80520523 podStartE2EDuration="5.746028747s" podCreationTimestamp="2026-02-02 09:16:36 +0000 UTC" firstStartedPulling="2026-02-02 09:16:38.683274108 +0000 UTC m=+1232.538899664" lastFinishedPulling="2026-02-02 09:16:39.624097615 +0000 UTC m=+1233.479723181" observedRunningTime="2026-02-02 09:16:41.733855098 +0000 UTC m=+1235.589480644" watchObservedRunningTime="2026-02-02 09:16:41.746028747 +0000 UTC m=+1235.601654293" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.780439 4720 scope.go:117] "RemoveContainer" containerID="1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.787243 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.726247736 podStartE2EDuration="5.787226063s" podCreationTimestamp="2026-02-02 09:16:36 +0000 UTC" firstStartedPulling="2026-02-02 09:16:39.063153705 +0000 UTC m=+1232.918779261" lastFinishedPulling="2026-02-02 09:16:40.124132022 +0000 UTC m=+1233.979757588" observedRunningTime="2026-02-02 09:16:41.76052878 +0000 UTC m=+1235.616154336" watchObservedRunningTime="2026-02-02 09:16:41.787226063 +0000 UTC m=+1235.642851609" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.818931 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.826929 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.849938 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850327 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-notification-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850344 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" 
containerName="ceilometer-notification-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850359 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-central-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850366 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-central-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850379 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="proxy-httpd" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850385 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="proxy-httpd" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850393 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="dnsmasq-dns" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850399 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="dnsmasq-dns" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850420 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="sg-core" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850426 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="sg-core" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850438 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850443 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850452 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api-log" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850457 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api-log" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.850470 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="init" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850476 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="init" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850628 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850646 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-notification-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850658 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="sg-core" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850667 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="proxy-httpd" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850677 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" containerName="ceilometer-central-agent" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850685 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ca6373-245c-4687-92af-516683b180f5" containerName="dnsmasq-dns" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.850694 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" containerName="cinder-api-log" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.851624 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.868130 4720 scope.go:117] "RemoveContainer" containerID="86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.868558 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.869257 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.869383 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.874157 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229\": container with ID starting with 86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229 not found: ID does not exist" containerID="86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.874192 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229"} err="failed to get container status \"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229\": rpc error: code = NotFound desc = could not find container \"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229\": container with ID starting with 86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229 not found: ID does not exist" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.874218 4720 scope.go:117] "RemoveContainer" containerID="1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" Feb 02 09:16:41 crc kubenswrapper[4720]: E0202 09:16:41.875030 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e\": container with ID starting with 1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e not found: ID does not exist" containerID="1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.875086 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e"} err="failed to get container status 
\"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e\": rpc error: code = NotFound desc = could not find container \"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e\": container with ID starting with 1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e not found: ID does not exist" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.875113 4720 scope.go:117] "RemoveContainer" containerID="86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.875485 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229"} err="failed to get container status \"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229\": rpc error: code = NotFound desc = could not find container \"86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229\": container with ID starting with 86abfdd9caf3f4a0f04162f913399643065c8df25481b184ace676b75b212229 not found: ID does not exist" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.875503 4720 scope.go:117] "RemoveContainer" containerID="1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.875995 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e"} err="failed to get container status \"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e\": rpc error: code = NotFound desc = could not find container \"1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e\": container with ID starting with 1e0f5bcd6bde09872ea66a5e3367880a837ec5e42fec5d3f87c516ee138fb61e not found: ID does not exist" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.878852 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.917191 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.928849 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.945963 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.970196 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.971911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.971997 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fdd8300-935b-4abd-b4c3-2a3894f613ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972022 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972048 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-scripts\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972210 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972239 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fdd8300-935b-4abd-b4c3-2a3894f613ed-logs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.972269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjh7m\" (UniqueName: \"kubernetes.io/projected/1fdd8300-935b-4abd-b4c3-2a3894f613ed-kube-api-access-cjh7m\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.979021 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-scripts" Feb 02 09:16:41 crc kubenswrapper[4720]: I0202 09:16:41.979203 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.010175 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074294 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074359 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074391 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-scripts\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074494 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074604 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fdd8300-935b-4abd-b4c3-2a3894f613ed-logs\") pod \"cinder-api-0\" (UID: 
\"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074643 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074678 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjh7m\" (UniqueName: \"kubernetes.io/projected/1fdd8300-935b-4abd-b4c3-2a3894f613ed-kube-api-access-cjh7m\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074707 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmpd\" (UniqueName: \"kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fdd8300-935b-4abd-b4c3-2a3894f613ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074915 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.074936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.075331 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fdd8300-935b-4abd-b4c3-2a3894f613ed-logs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.077303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fdd8300-935b-4abd-b4c3-2a3894f613ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.080336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.085024 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.085414 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.092295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-scripts\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.115290 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.115388 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-754d8f7774-zcmq5"] Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.115627 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.119247 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.124151 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.124322 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.125081 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjh7m\" (UniqueName: \"kubernetes.io/projected/1fdd8300-935b-4abd-b4c3-2a3894f613ed-kube-api-access-cjh7m\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.140788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fdd8300-935b-4abd-b4c3-2a3894f613ed-config-data\") pod \"cinder-api-0\" (UID: \"1fdd8300-935b-4abd-b4c3-2a3894f613ed\") " pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.141408 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-754d8f7774-zcmq5"] Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176035 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176093 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176135 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176182 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.176210 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmpd\" 
(UniqueName: \"kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.177729 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.178214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.183467 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.186436 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.188836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.194760 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.196749 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.196964 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmpd\" (UniqueName: \"kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd\") pod \"ceilometer-0\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282154 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zh4t\" (UniqueName: \"kubernetes.io/projected/71cd4ff5-a131-4208-9f0c-bc9651093d43-kube-api-access-9zh4t\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282309 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cd4ff5-a131-4208-9f0c-bc9651093d43-logs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-public-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282406 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-combined-ca-bundle\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282459 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-internal-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.282482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data-custom\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.301543 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.349676 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jspmg" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.373571 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386150 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cd4ff5-a131-4208-9f0c-bc9651093d43-logs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-public-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386261 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-combined-ca-bundle\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-internal-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data-custom\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386359 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386390 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zh4t\" (UniqueName: \"kubernetes.io/projected/71cd4ff5-a131-4208-9f0c-bc9651093d43-kube-api-access-9zh4t\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.386743 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71cd4ff5-a131-4208-9f0c-bc9651093d43-logs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 
09:16:42.390917 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data-custom\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.397225 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-public-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.397386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-combined-ca-bundle\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.398079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-config-data\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.401506 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cd4ff5-a131-4208-9f0c-bc9651093d43-internal-tls-certs\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.403143 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zh4t\" (UniqueName: \"kubernetes.io/projected/71cd4ff5-a131-4208-9f0c-bc9651093d43-kube-api-access-9zh4t\") pod \"barbican-api-754d8f7774-zcmq5\" (UID: \"71cd4ff5-a131-4208-9f0c-bc9651093d43\") " pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.467713 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.487310 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle\") pod \"1a624e5d-098a-44e1-95b7-fa398979891a\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.487581 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9rg\" (UniqueName: \"kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg\") pod \"1a624e5d-098a-44e1-95b7-fa398979891a\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.487688 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data\") pod \"1a624e5d-098a-44e1-95b7-fa398979891a\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " Feb 02 09:16:42 crc 
kubenswrapper[4720]: I0202 09:16:42.487714 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data\") pod \"1a624e5d-098a-44e1-95b7-fa398979891a\" (UID: \"1a624e5d-098a-44e1-95b7-fa398979891a\") " Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.492301 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg" (OuterVolumeSpecName: "kube-api-access-hm9rg") pod "1a624e5d-098a-44e1-95b7-fa398979891a" (UID: "1a624e5d-098a-44e1-95b7-fa398979891a"). InnerVolumeSpecName "kube-api-access-hm9rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.495222 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "1a624e5d-098a-44e1-95b7-fa398979891a" (UID: "1a624e5d-098a-44e1-95b7-fa398979891a"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.498108 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data" (OuterVolumeSpecName: "config-data") pod "1a624e5d-098a-44e1-95b7-fa398979891a" (UID: "1a624e5d-098a-44e1-95b7-fa398979891a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.524953 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a624e5d-098a-44e1-95b7-fa398979891a" (UID: "1a624e5d-098a-44e1-95b7-fa398979891a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.590424 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.590471 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm9rg\" (UniqueName: \"kubernetes.io/projected/1a624e5d-098a-44e1-95b7-fa398979891a-kube-api-access-hm9rg\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.590486 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.590512 4720 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1a624e5d-098a-44e1-95b7-fa398979891a-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.644796 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.698744 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 09:16:42 crc kubenswrapper[4720]: W0202 09:16:42.708154 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fdd8300_935b_4abd_b4c3_2a3894f613ed.slice/crio-9e43a68c4c7ad124dce7a208051f4a8b998f69fb2452fb817cf31aef5cb460f7 WatchSource:0}: Error finding container 9e43a68c4c7ad124dce7a208051f4a8b998f69fb2452fb817cf31aef5cb460f7: Status 404 returned error can't find the container with id 9e43a68c4c7ad124dce7a208051f4a8b998f69fb2452fb817cf31aef5cb460f7 Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.759293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jspmg" event={"ID":"1a624e5d-098a-44e1-95b7-fa398979891a","Type":"ContainerDied","Data":"6d61a41204909a92d6d125235b52bc4890d4304f930ac62ca4ae189ce1e5a3a3"} Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.759561 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d61a41204909a92d6d125235b52bc4890d4304f930ac62ca4ae189ce1e5a3a3" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.759348 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jspmg" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.763480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fdd8300-935b-4abd-b4c3-2a3894f613ed","Type":"ContainerStarted","Data":"9e43a68c4c7ad124dce7a208051f4a8b998f69fb2452fb817cf31aef5cb460f7"} Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.828598 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.918645 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01974d03-b4c4-4ed6-99fe-a49cd815e6f2" path="/var/lib/kubelet/pods/01974d03-b4c4-4ed6-99fe-a49cd815e6f2/volumes" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.919517 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a1ffe6-f27c-4751-9872-186b2010e2f0" path="/var/lib/kubelet/pods/06a1ffe6-f27c-4751-9872-186b2010e2f0/volumes" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.980363 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 09:16:42 crc kubenswrapper[4720]: E0202 09:16:42.980994 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" containerName="manila-db-sync" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.981015 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" containerName="manila-db-sync" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.981219 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" containerName="manila-db-sync" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.982114 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.986621 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-27w2w" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.986963 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.987159 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 09:16:42 crc kubenswrapper[4720]: I0202 09:16:42.987813 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.016140 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.120625 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.120682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.120781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.120911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.121039 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.121075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzj27\" (UniqueName: \"kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.147377 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.151350 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.189500 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259125 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259206 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzj27\" (UniqueName: \"kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259304 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.259319 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.261339 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.262011 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.284684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.285423 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data\") 
pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.285564 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.286241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.288387 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.288611 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="dnsmasq-dns" containerID="cri-o://24734a6bc9622ce8015fe7f60caa143aaa6dd4ac51582e1a2b14c4739832328c" gracePeriod=10 Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.296751 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzj27\" (UniqueName: \"kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27\") pod \"manila-scheduler-0\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") " pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.325971 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.327438 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.327515 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.338925 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.340604 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.358676 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.359223 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360522 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360555 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360578 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360595 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360631 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlnx\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360662 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360803 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.360960 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.371662 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlnx\" (UniqueName: 
\"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463508 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463537 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463584 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frsb\" (UniqueName: \"kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463648 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463720 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2p2z\" (UniqueName: \"kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463787 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463803 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " 
pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463844 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.463865 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.464812 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.468228 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.475281 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.478752 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.479193 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.482154 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.484297 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.484988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlnx\" (UniqueName: 
\"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx\") pod \"manila-share-share1-0\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.489459 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-754d8f7774-zcmq5"] Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578530 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frsb\" (UniqueName: \"kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2p2z\" (UniqueName: \"kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578836 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.578966 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.579028 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.579088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.579641 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.579642 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.580250 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.580290 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.580326 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.580420 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.588178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.591825 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.602562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.603535 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: W0202 09:16:43.604710 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71cd4ff5_a131_4208_9f0c_bc9651093d43.slice/crio-168034e995bc2b3a423db91d39a95d180634cbe20e31109fee847e89f9c2f45b WatchSource:0}: Error finding container 168034e995bc2b3a423db91d39a95d180634cbe20e31109fee847e89f9c2f45b: Status 404 returned error can't find the container with id 168034e995bc2b3a423db91d39a95d180634cbe20e31109fee847e89f9c2f45b Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.606572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.612788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2p2z\" (UniqueName: \"kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z\") pod \"manila-api-0\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") " pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.650782 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.661462 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.661864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frsb\" (UniqueName: \"kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb\") pod \"dnsmasq-dns-5865f9d689-gdtzg\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.747383 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.776287 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.894262 4720 generic.go:334] "Generic (PLEG): container finished" podID="6f54a575-b00e-4748-ab42-499cf997a92c" containerID="24734a6bc9622ce8015fe7f60caa143aaa6dd4ac51582e1a2b14c4739832328c" exitCode=0 Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.894652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" event={"ID":"6f54a575-b00e-4748-ab42-499cf997a92c","Type":"ContainerDied","Data":"24734a6bc9622ce8015fe7f60caa143aaa6dd4ac51582e1a2b14c4739832328c"} Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.927134 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerStarted","Data":"46847029799ff372116831cbeffd6df8e0a028cfc31628b0546d6a33d871d2e6"} Feb 02 09:16:43 crc kubenswrapper[4720]: I0202 09:16:43.963074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754d8f7774-zcmq5" event={"ID":"71cd4ff5-a131-4208-9f0c-bc9651093d43","Type":"ContainerStarted","Data":"168034e995bc2b3a423db91d39a95d180634cbe20e31109fee847e89f9c2f45b"} Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.026777 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f77897559-wqg4q"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.028210 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f77897559-wqg4q" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-api" containerID="cri-o://c981fe18a1377ea6a437d45330dc0b7963cc7073d1d2b4a26b838e1dbc6a75e1" gracePeriod=30 Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.031513 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f77897559-wqg4q" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" containerID="cri-o://4449717031a7014cf359eccae2c6706c30073c4d22cfcdfa9c79f07e6435855b" gracePeriod=30 Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.042578 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc5779b69-676fs"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.044229 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.062411 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f77897559-wqg4q" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": EOF" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.069381 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc5779b69-676fs"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.187716 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.192716 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.199248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-httpd-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212284 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sldv\" (UniqueName: \"kubernetes.io/projected/3287a569-10ab-49e9-bf47-498b14a54b1c-kube-api-access-7sldv\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212319 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212343 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-internal-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212697 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-public-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212737 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-combined-ca-bundle\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.212837 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-ovndb-tls-certs\") pod 
\"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.315647 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.315783 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.315976 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wkx\" (UniqueName: \"kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316014 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316036 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316089 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb\") pod \"6f54a575-b00e-4748-ab42-499cf997a92c\" (UID: \"6f54a575-b00e-4748-ab42-499cf997a92c\") " Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316345 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-public-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-combined-ca-bundle\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316422 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-ovndb-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316460 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-httpd-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sldv\" (UniqueName: \"kubernetes.io/projected/3287a569-10ab-49e9-bf47-498b14a54b1c-kube-api-access-7sldv\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316535 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.316550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-internal-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.334103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-httpd-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.334664 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-ovndb-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.338485 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-internal-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.360207 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-public-tls-certs\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.361582 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-combined-ca-bundle\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.361740 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3287a569-10ab-49e9-bf47-498b14a54b1c-config\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 
09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.366177 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx" (OuterVolumeSpecName: "kube-api-access-z4wkx") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). InnerVolumeSpecName "kube-api-access-z4wkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.404399 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sldv\" (UniqueName: \"kubernetes.io/projected/3287a569-10ab-49e9-bf47-498b14a54b1c-kube-api-access-7sldv\") pod \"neutron-dc5779b69-676fs\" (UID: \"3287a569-10ab-49e9-bf47-498b14a54b1c\") " pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.419755 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wkx\" (UniqueName: \"kubernetes.io/projected/6f54a575-b00e-4748-ab42-499cf997a92c-kube-api-access-z4wkx\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.458930 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.581749 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.667437 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.754994 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.755702 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.817657 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.847340 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.859093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config" (OuterVolumeSpecName: "config") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.860297 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.860322 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.860332 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.879956 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.901250 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f54a575-b00e-4748-ab42-499cf997a92c" (UID: "6f54a575-b00e-4748-ab42-499cf997a92c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:44 crc kubenswrapper[4720]: I0202 09:16:44.962662 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f54a575-b00e-4748-ab42-499cf997a92c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:44.999234 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fdd8300-935b-4abd-b4c3-2a3894f613ed","Type":"ContainerStarted","Data":"fa333ebf67ac1088343ae98ca1721e16195cd1b78b16f272237b0f27754534e4"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.008120 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" event={"ID":"6f54a575-b00e-4748-ab42-499cf997a92c","Type":"ContainerDied","Data":"ce32023eb8c6f47c283281db7b9abae7616164b5139fbf89e3b27323a0b166ab"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.008173 4720 scope.go:117] "RemoveContainer" containerID="24734a6bc9622ce8015fe7f60caa143aaa6dd4ac51582e1a2b14c4739832328c" Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.008295 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-zfcgg" Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.021712 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerStarted","Data":"c76f175a3035712ace8e082ee20d11fc9c85bbaaec885902443cc416ba7f3a4d"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.029178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754d8f7774-zcmq5" event={"ID":"71cd4ff5-a131-4208-9f0c-bc9651093d43","Type":"ContainerStarted","Data":"b2565f069cc0286cf4d1d2b250d7a5235ee71a65258173de1bdf56589cafe789"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.031701 4720 generic.go:334] "Generic (PLEG): container finished" podID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerID="4449717031a7014cf359eccae2c6706c30073c4d22cfcdfa9c79f07e6435855b" exitCode=0 Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.031766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerDied","Data":"4449717031a7014cf359eccae2c6706c30073c4d22cfcdfa9c79f07e6435855b"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.032617 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerStarted","Data":"1af5a3ee989b96b5f3f7836f23b8e11841050f085481193a98e1aefc5a9ab090"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.033289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" event={"ID":"b9363a36-d6cb-4d9d-b11e-bc62166728bd","Type":"ContainerStarted","Data":"2284ded7e66fcb4a5623e1c75e5313e5e8c12297d477ebaa6c4ba426ad838199"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.033951 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerStarted","Data":"5c72e8f4c8d4762000ea969511156ec8177a09bc86912860d82ba12990b2a912"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.034648 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerStarted","Data":"3c6a8c86547f85c49627e724036e83bdfa4976aaf29ad74e3b860792f29ebe21"} Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.085420 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.097706 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-zfcgg"] Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.185086 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc5779b69-676fs"] Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.298083 4720 scope.go:117] "RemoveContainer" containerID="df59cbf8b37a42d8fb62d192c2c7f492c56972e8ee25118cc1b80abacf35528e" Feb 02 09:16:45 crc kubenswrapper[4720]: I0202 09:16:45.409002 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6f77897559-wqg4q" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 
09:16:46.162738 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-754d8f7774-zcmq5" event={"ID":"71cd4ff5-a131-4208-9f0c-bc9651093d43","Type":"ContainerStarted","Data":"bbe3d8b57792d691bafc39db66418c44ec65376b7a515ca930751b607b101bb5"}
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.164441 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-754d8f7774-zcmq5"
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.164478 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-754d8f7774-zcmq5"
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.219454 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.219522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc5779b69-676fs" event={"ID":"3287a569-10ab-49e9-bf47-498b14a54b1c","Type":"ContainerStarted","Data":"4a788602eb1ef4a8dd823e5017b990523602cef865ed5f0c308ec73ddd60373e"}
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.219547 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc5779b69-676fs" event={"ID":"3287a569-10ab-49e9-bf47-498b14a54b1c","Type":"ContainerStarted","Data":"ea6b7bcbc0e046fd711a164840dd445e08bcfd38078d638f5e1e11572c9f2fb3"}
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.249746 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-754d8f7774-zcmq5" podStartSLOduration=4.249728232 podStartE2EDuration="4.249728232s" podCreationTimestamp="2026-02-02 09:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:46.206092177 +0000 UTC m=+1240.061717733" watchObservedRunningTime="2026-02-02 09:16:46.249728232 +0000 UTC m=+1240.105353778"
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.290703 4720 generic.go:334] "Generic (PLEG): container finished" podID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerID="2efa60fb9a2225e9a40b531989b9c69a707b3e07606d1aff1fa9564501db38fb" exitCode=0
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.290945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" event={"ID":"b9363a36-d6cb-4d9d-b11e-bc62166728bd","Type":"ContainerDied","Data":"2efa60fb9a2225e9a40b531989b9c69a707b3e07606d1aff1fa9564501db38fb"}
Feb 02 09:16:46 crc kubenswrapper[4720]: I0202 09:16:46.964733 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" path="/var/lib/kubelet/pods/6f54a575-b00e-4748-ab42-499cf997a92c/volumes"
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.315454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerStarted","Data":"c1914132af0e590e9744c50fa77eb7c6e206f4e81af63415a3aef690c411fc39"}
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.319760 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerStarted","Data":"f67912c1cdbc3ee37193a4d855b6fda3ff2dc6756402c2cb13731af256916048"}
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.331753 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerStarted","Data":"44fe4394bf4705c46aa36a7d141124ad80d2445cfc8486b42c23dd5445d26b99"}
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.387256 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.433223 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.901622 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.902052 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.919851 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-86d4c4b4d8-gbbkh"
Feb 02 09:16:47 crc kubenswrapper[4720]: I0202 09:16:47.951829 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.022352 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.033334 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.129893 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.375372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerStarted","Data":"4b9b38d9502b2a20ad9dc133d4736acadf6582fbfcaef7638c5640330db51aa6"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.395778 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerStarted","Data":"8a41f39cc922d4c120179c65f4ae72b24378de8609ad7a5e6c2d389a4edcec45"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.396096 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api-log" containerID="cri-o://44fe4394bf4705c46aa36a7d141124ad80d2445cfc8486b42c23dd5445d26b99" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.396249 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.396290 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api" containerID="cri-o://8a41f39cc922d4c120179c65f4ae72b24378de8609ad7a5e6c2d389a4edcec45" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.407497 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.251026233 podStartE2EDuration="6.407472103s" podCreationTimestamp="2026-02-02 09:16:42 +0000 UTC" firstStartedPulling="2026-02-02 09:16:44.277939079 +0000 UTC m=+1238.133564635" lastFinishedPulling="2026-02-02 09:16:45.434384949 +0000 UTC m=+1239.290010505" observedRunningTime="2026-02-02 09:16:48.396045502 +0000 UTC m=+1242.251671058" watchObservedRunningTime="2026-02-02 09:16:48.407472103 +0000 UTC m=+1242.263097659"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.411195 4720 generic.go:334] "Generic (PLEG): container finished" podID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerID="c981fe18a1377ea6a437d45330dc0b7963cc7073d1d2b4a26b838e1dbc6a75e1" exitCode=0
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.411245 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerDied","Data":"c981fe18a1377ea6a437d45330dc0b7963cc7073d1d2b4a26b838e1dbc6a75e1"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.411562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f77897559-wqg4q" event={"ID":"fb096f16-61ca-432b-bc1a-42d9a4e12031","Type":"ContainerDied","Data":"3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.411634 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae08af203b05a9f982704158aa047521f7cae0c62e64f5ab2734e1bbe4b91a1"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.419945 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7db56589cb-hzwrj"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.424800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fdd8300-935b-4abd-b4c3-2a3894f613ed","Type":"ContainerStarted","Data":"c842d59b005c215c559e63290af46e42e8728b6f20e86890ca67376a421c1f98"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.425971 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.427137 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b896b6bb4-gxblv"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.434639 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.434618927 podStartE2EDuration="5.434618927s" podCreationTimestamp="2026-02-02 09:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:48.423265507 +0000 UTC m=+1242.278891063" watchObservedRunningTime="2026-02-02 09:16:48.434618927 +0000 UTC m=+1242.290244483"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.444479 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f77897559-wqg4q"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.459101 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" event={"ID":"b9363a36-d6cb-4d9d-b11e-bc62166728bd","Type":"ContainerStarted","Data":"a91821d243d26bcef6530da3764658d30def941e5932acf2b61b755a1ec70b32"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.460647 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.484051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerStarted","Data":"10311d70ad6352264d3bc3ebd756bf6ec5403094a65d8cb6e49e74ba299f9923"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.490031 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="cinder-volume" containerID="cri-o://a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.490908 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="cinder-backup" containerID="cri-o://d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491132 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="probe" containerID="cri-o://b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491442 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="cinder-scheduler" containerID="cri-o://bab649082b43531730d16b530a1a8e13da2262a21bf1d9188f569f9817d1ba42" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc5779b69-676fs" event={"ID":"3287a569-10ab-49e9-bf47-498b14a54b1c","Type":"ContainerStarted","Data":"1caa4a3007e797dc6b8ffebc27596ef273f4cf89ebeca5e0a0d935170ce33c46"}
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491590 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="probe" containerID="cri-o://9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491675 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="probe" containerID="cri-o://d53ca2f3e3bd40b35b881371f89a22d4e641c8aab8b4e6702bd2ba80a476743e" gracePeriod=30
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.491949 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dc5779b69-676fs"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.498873 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499002 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jdlb\" (UniqueName: \"kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499049 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499167 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499192 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.499295 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config\") pod \"fb096f16-61ca-432b-bc1a-42d9a4e12031\" (UID: \"fb096f16-61ca-432b-bc1a-42d9a4e12031\") "
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.514017 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.515022 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb" (OuterVolumeSpecName: "kube-api-access-2jdlb") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "kube-api-access-2jdlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.517161 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jdlb\" (UniqueName: \"kubernetes.io/projected/fb096f16-61ca-432b-bc1a-42d9a4e12031-kube-api-access-2jdlb\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.517182 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.613124 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.613103879 podStartE2EDuration="7.613103879s" podCreationTimestamp="2026-02-02 09:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:48.478647541 +0000 UTC m=+1242.334273097" watchObservedRunningTime="2026-02-02 09:16:48.613103879 +0000 UTC m=+1242.468729445"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.640198 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc5779b69-676fs" podStartSLOduration=4.640182911 podStartE2EDuration="4.640182911s" podCreationTimestamp="2026-02-02 09:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:48.562547729 +0000 UTC m=+1242.418173295" watchObservedRunningTime="2026-02-02 09:16:48.640182911 +0000 UTC m=+1242.495808457"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.645061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config" (OuterVolumeSpecName: "config") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.647596 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" podStartSLOduration=5.647582486 podStartE2EDuration="5.647582486s" podCreationTimestamp="2026-02-02 09:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:48.609204156 +0000 UTC m=+1242.464829712" watchObservedRunningTime="2026-02-02 09:16:48.647582486 +0000 UTC m=+1242.503208042"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.670220 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.672128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.716584 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.726582 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.726610 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-config\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.726622 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.726631 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.754700 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fb096f16-61ca-432b-bc1a-42d9a4e12031" (UID: "fb096f16-61ca-432b-bc1a-42d9a4e12031"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.791421 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7db56589cb-hzwrj"
Feb 02 09:16:48 crc kubenswrapper[4720]: I0202 09:16:48.828402 4720 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb096f16-61ca-432b-bc1a-42d9a4e12031-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.523569 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerID="8a41f39cc922d4c120179c65f4ae72b24378de8609ad7a5e6c2d389a4edcec45" exitCode=0
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.523805 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerID="44fe4394bf4705c46aa36a7d141124ad80d2445cfc8486b42c23dd5445d26b99" exitCode=143
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.523802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerDied","Data":"8a41f39cc922d4c120179c65f4ae72b24378de8609ad7a5e6c2d389a4edcec45"}
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.523868 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerDied","Data":"44fe4394bf4705c46aa36a7d141124ad80d2445cfc8486b42c23dd5445d26b99"}
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.525245 4720 generic.go:334] "Generic (PLEG): container finished" podID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerID="a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b" exitCode=0
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.525291 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerDied","Data":"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b"}
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.527451 4720 generic.go:334] "Generic (PLEG): container finished" podID="468e2e04-844c-47a2-a554-1fff701d0802" containerID="b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64" exitCode=0
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.527489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerDied","Data":"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64"}
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.545332 4720 generic.go:334] "Generic (PLEG): container finished" podID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerID="d53ca2f3e3bd40b35b881371f89a22d4e641c8aab8b4e6702bd2ba80a476743e" exitCode=0
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.545504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2f53fcb-687a-4a01-9949-6c50248fd792","Type":"ContainerDied","Data":"d53ca2f3e3bd40b35b881371f89a22d4e641c8aab8b4e6702bd2ba80a476743e"}
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.546008 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f77897559-wqg4q"
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.641955 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f77897559-wqg4q"]
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.642800 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.656872 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f77897559-wqg4q"]
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748607 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748654 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748690 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748806 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2p2z\" (UniqueName: \"kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748861 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.748952 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.749015 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom\") pod \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\" (UID: \"cd0c52a2-cb03-4b51-a0cd-d0beaf320058\") "
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.758136 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.758197 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.759047 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs" (OuterVolumeSpecName: "logs") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.764074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z" (OuterVolumeSpecName: "kube-api-access-s2p2z") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "kube-api-access-s2p2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.764140 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts" (OuterVolumeSpecName: "scripts") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.791819 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.829240 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data" (OuterVolumeSpecName: "config-data") pod "cd0c52a2-cb03-4b51-a0cd-d0beaf320058" (UID: "cd0c52a2-cb03-4b51-a0cd-d0beaf320058"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855277 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-logs\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855302 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855312 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855322 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855330 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855338 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:49 crc kubenswrapper[4720]: I0202 09:16:49.855347 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2p2z\" (UniqueName: \"kubernetes.io/projected/cd0c52a2-cb03-4b51-a0cd-d0beaf320058-kube-api-access-s2p2z\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.187498 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271425 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271746 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271766 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqctt\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271823 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271839 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271894 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271946 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271961 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271977 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272050 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272095 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272113 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys\") pod \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\" (UID: \"12cad3b9-cfe4-4bea-89b3-8cf8ec552906\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.271515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272573 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys" (OuterVolumeSpecName: "sys") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.272614 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273227 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273280 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev" (OuterVolumeSpecName: "dev") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273295 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273727 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run" (OuterVolumeSpecName: "run") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.273988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.274055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.281554 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt" (OuterVolumeSpecName: "kube-api-access-sqctt") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "kube-api-access-sqctt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.285387 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph" (OuterVolumeSpecName: "ceph") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.285522 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts" (OuterVolumeSpecName: "scripts") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.288400 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.289355 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.341273 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379107 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379143 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379169 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379357 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379466 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.379495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380131 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380194 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") "
Feb 02 09:16:50 crc
kubenswrapper[4720]: I0202 09:16:50.380233 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380367 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380395 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8h8\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.380410 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.381267 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382490 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382509 4720 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-sys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382519 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382529 4720 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382536 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382545 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqctt\" (UniqueName: 
\"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-kube-api-access-sqctt\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382554 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382562 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382570 4720 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382578 4720 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-dev\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382592 4720 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382601 4720 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382609 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382618 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.382626 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383062 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383098 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383134 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383318 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383382 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383403 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run" (OuterVolumeSpecName: "run") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383425 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev" (OuterVolumeSpecName: "dev") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.383448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.387989 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys" (OuterVolumeSpecName: "sys") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.391052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.400078 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph" (OuterVolumeSpecName: "ceph") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.400125 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8" (OuterVolumeSpecName: "kube-api-access-8s8h8") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "kube-api-access-8s8h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.405061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts" (OuterVolumeSpecName: "scripts") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.437828 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-86d4c4b4d8-gbbkh" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.459383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data" (OuterVolumeSpecName: "config-data") pod "12cad3b9-cfe4-4bea-89b3-8cf8ec552906" (UID: "12cad3b9-cfe4-4bea-89b3-8cf8ec552906"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.489605 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.490939 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") pod \"468e2e04-844c-47a2-a554-1fff701d0802\" (UID: \"468e2e04-844c-47a2-a554-1fff701d0802\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491421 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491437 4720 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491448 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491456 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491465 4720 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491473 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491481 4720 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-dev\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491488 4720 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491498 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12cad3b9-cfe4-4bea-89b3-8cf8ec552906-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491505 4720 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491514 4720 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-sys\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491523 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8h8\" (UniqueName: \"kubernetes.io/projected/468e2e04-844c-47a2-a554-1fff701d0802-kube-api-access-8s8h8\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc 
kubenswrapper[4720]: I0202 09:16:50.491531 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491539 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.491546 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/468e2e04-844c-47a2-a554-1fff701d0802-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: W0202 09:16:50.492584 4720 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/468e2e04-844c-47a2-a554-1fff701d0802/volumes/kubernetes.io~secret/combined-ca-bundle Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.492604 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.523815 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.524418 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon-log" containerID="cri-o://7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269" gracePeriod=30 Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.524782 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" containerID="cri-o://cfa0d360cae26b0c2a8dc8dbb5704822a8b671b94feadfbd85eda5647e826c27" gracePeriod=30 Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.534021 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.584928 4720 generic.go:334] "Generic (PLEG): container finished" podID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerID="9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8" exitCode=0 Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.584996 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerDied","Data":"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.585023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"12cad3b9-cfe4-4bea-89b3-8cf8ec552906","Type":"ContainerDied","Data":"085c6eec5b8a8084e7a82b0b2c56021f05c9354b1a64d813b3c9b9cec4a8cb01"} Feb 02 
09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.585038 4720 scope.go:117] "RemoveContainer" containerID="9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.585164 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.595852 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.600726 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.601590 4720 generic.go:334] "Generic (PLEG): container finished" podID="468e2e04-844c-47a2-a554-1fff701d0802" containerID="d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310" exitCode=0 Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.601647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerDied","Data":"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.601670 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"468e2e04-844c-47a2-a554-1fff701d0802","Type":"ContainerDied","Data":"0857059f83d290e655a58161c20492be9d15b8c9711a8ba861969a10aff839ce"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.601720 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.610794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerStarted","Data":"f8d1bffd3e986a5115ba8117bd84225d10b3385a7b6d94122747e828ddf583d0"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.612159 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.619165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data" (OuterVolumeSpecName: "config-data") pod "468e2e04-844c-47a2-a554-1fff701d0802" (UID: "468e2e04-844c-47a2-a554-1fff701d0802"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.633362 4720 generic.go:334] "Generic (PLEG): container finished" podID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerID="bab649082b43531730d16b530a1a8e13da2262a21bf1d9188f569f9817d1ba42" exitCode=0 Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.633426 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2f53fcb-687a-4a01-9949-6c50248fd792","Type":"ContainerDied","Data":"bab649082b43531730d16b530a1a8e13da2262a21bf1d9188f569f9817d1ba42"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.633507 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.642259 4720 scope.go:117] "RemoveContainer" containerID="a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.649600 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.652765 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cd0c52a2-cb03-4b51-a0cd-d0beaf320058","Type":"ContainerDied","Data":"3c6a8c86547f85c49627e724036e83bdfa4976aaf29ad74e3b860792f29ebe21"} Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.691127 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.697208 4720 scope.go:117] "RemoveContainer" containerID="9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701484 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701523 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701583 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701635 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701720 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.701845 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsz6\" (UniqueName: \"kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6\") pod \"d2f53fcb-687a-4a01-9949-6c50248fd792\" (UID: \"d2f53fcb-687a-4a01-9949-6c50248fd792\") " Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.702446 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468e2e04-844c-47a2-a554-1fff701d0802-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.704636 4720 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.707045 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8\": container with ID starting with 9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8 not found: ID does not exist" containerID="9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.707092 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8"} err="failed to get container status \"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8\": rpc error: code = NotFound desc = could not find container \"9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8\": container with ID starting with 9b312af40f0fef75ebd3454a94f05e56c10853b0d031d7d3013faf64b9b5c3b8 not found: ID does not exist" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.707118 4720 scope.go:117] "RemoveContainer" containerID="a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.707775 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b\": container with ID starting with a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b not found: ID does not exist" containerID="a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.707793 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b"} err="failed to get container status \"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b\": rpc error: code = NotFound desc = could not find container \"a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b\": container with ID starting with a1ddaa886779afc72089ad27da0e48777ca827dae537153e97a18e537d313e0b not found: ID does not exist" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.707805 4720 scope.go:117] "RemoveContainer" containerID="b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.707858 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.715626 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts" (OuterVolumeSpecName: "scripts") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.718617 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.725624 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6" (OuterVolumeSpecName: "kube-api-access-tnsz6") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "kube-api-access-tnsz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.730934 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731416 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api-log" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731445 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api-log" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731457 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731464 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731471 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-api" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731476 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-api" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731488 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="cinder-volume" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731495 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="cinder-volume" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731526 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="cinder-backup" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731532 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="cinder-backup" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731543 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731549 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731557 4720 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="dnsmasq-dns" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731563 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="dnsmasq-dns" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731576 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731597 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731608 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731614 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731629 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731635 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731649 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="cinder-scheduler" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731655 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="cinder-scheduler" Feb 02 09:16:50 crc kubenswrapper[4720]: E0202 09:16:50.731685 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="init" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731694 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="init" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731923 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-httpd" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731937 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="cinder-scheduler" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731946 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api-log" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731973 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f54a575-b00e-4748-ab42-499cf997a92c" containerName="dnsmasq-dns" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731982 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.731991 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.732002 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="468e2e04-844c-47a2-a554-1fff701d0802" containerName="cinder-backup" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.732009 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="probe" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.732018 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" containerName="neutron-api" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.732034 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" containerName="cinder-volume" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.732062 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" containerName="manila-api" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.733306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.737515 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.762581 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.764403 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7321564289999998 podStartE2EDuration="9.764381846s" podCreationTimestamp="2026-02-02 09:16:41 +0000 UTC" firstStartedPulling="2026-02-02 09:16:42.834819783 +0000 UTC m=+1236.690445339" lastFinishedPulling="2026-02-02 09:16:49.8670452 +0000 UTC m=+1243.722670756" observedRunningTime="2026-02-02 09:16:50.707522698 +0000 UTC m=+1244.563148254" watchObservedRunningTime="2026-02-02 09:16:50.764381846 +0000 UTC m=+1244.620007402" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.787868 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804521 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804572 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h46g\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-kube-api-access-9h46g\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804615 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804635 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804658 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-run\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804728 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804769 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") 
" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804834 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804922 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804940 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.804969 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.805027 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.805037 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsz6\" (UniqueName: \"kubernetes.io/projected/d2f53fcb-687a-4a01-9949-6c50248fd792-kube-api-access-tnsz6\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.805047 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.805057 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2f53fcb-687a-4a01-9949-6c50248fd792-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.805066 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.809242 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.827694 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.842952 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.844590 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.847240 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.847591 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.847787 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.857522 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data" (OuterVolumeSpecName: "config-data") pod "d2f53fcb-687a-4a01-9949-6c50248fd792" (UID: "d2f53fcb-687a-4a01-9949-6c50248fd792"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.906679 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12cad3b9-cfe4-4bea-89b3-8cf8ec552906" path="/var/lib/kubelet/pods/12cad3b9-cfe4-4bea-89b3-8cf8ec552906/volumes" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.906922 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-run\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.906990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907051 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-logs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907072 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-run\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907184 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907221 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907317 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0c52a2-cb03-4b51-a0cd-d0beaf320058" path="/var/lib/kubelet/pods/cd0c52a2-cb03-4b51-a0cd-d0beaf320058/volumes" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907399 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907507 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907590 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data-custom\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907731 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907772 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907788 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907819 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907897 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb096f16-61ca-432b-bc1a-42d9a4e12031" path="/var/lib/kubelet/pods/fb096f16-61ca-432b-bc1a-42d9a4e12031/volumes" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907901 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.907939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908605 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908610 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-scripts\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908617 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h46g\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-kube-api-access-9h46g\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908831 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89td\" (UniqueName: \"kubernetes.io/projected/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-kube-api-access-j89td\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908848 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908973 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-public-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.908997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.909040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.909147 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2f53fcb-687a-4a01-9949-6c50248fd792-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.909182 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c19bbd5c-8368-477b-8014-e1de85c9abb2-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.910347 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/manila-api-0"] Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.911195 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.914533 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.916768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.918538 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.931232 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h46g\" (UniqueName: \"kubernetes.io/projected/c19bbd5c-8368-477b-8014-e1de85c9abb2-kube-api-access-9h46g\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:50 crc kubenswrapper[4720]: I0202 09:16:50.935028 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c19bbd5c-8368-477b-8014-e1de85c9abb2-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c19bbd5c-8368-477b-8014-e1de85c9abb2\") " pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.010796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-logs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.010863 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.010915 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.010941 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-combined-ca-bundle\") pod 
\"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.010989 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data-custom\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.011070 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-scripts\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.011113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89td\" (UniqueName: \"kubernetes.io/projected/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-kube-api-access-j89td\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.011167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-public-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.011225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.011334 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.014601 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-logs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.015216 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.021275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-config-data-custom\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.023203 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-combined-ca-bundle\") pod \"manila-api-0\" (UID: 
\"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.026859 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-public-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.036216 4720 scope.go:117] "RemoveContainer" containerID="d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.039171 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89td\" (UniqueName: \"kubernetes.io/projected/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-kube-api-access-j89td\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.039403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.044548 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5ebc33-71db-45cf-be50-fa0b92d38d7f-scripts\") pod \"manila-api-0\" (UID: \"0d5ebc33-71db-45cf-be50-fa0b92d38d7f\") " pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.048647 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.049748 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.062833 4720 scope.go:117] "RemoveContainer" containerID="b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64" Feb 02 09:16:51 crc kubenswrapper[4720]: E0202 09:16:51.063257 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64\": container with ID starting with b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64 not found: ID does not exist" containerID="b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.063299 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64"} err="failed to get container status \"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64\": rpc error: code = NotFound desc = could not find container \"b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64\": container with ID starting with b4e5290daabb7a971d52ce929365555f0f20f0b454e5560c550c9ffb92f9be64 not found: ID does not exist" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.063324 4720 scope.go:117] "RemoveContainer" containerID="d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310" Feb 02 09:16:51 crc kubenswrapper[4720]: E0202 09:16:51.066212 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310\": container with ID starting with d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310 not found: ID does not exist" containerID="d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.066247 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310"} err="failed to get container status \"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310\": rpc error: code = NotFound desc = could not find container \"d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310\": container with ID starting with d75514b9f822d9640b4b273a16779962e2cf5cd800b730b6a4484dbb655d1310 not found: ID does not exist" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.066268 4720 scope.go:117] "RemoveContainer" containerID="d53ca2f3e3bd40b35b881371f89a22d4e641c8aab8b4e6702bd2ba80a476743e" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.069284 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.077028 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.099501 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.135345 4720 scope.go:117] "RemoveContainer" containerID="bab649082b43531730d16b530a1a8e13da2262a21bf1d9188f569f9817d1ba42" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.152733 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 
09:16:51.170698 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.179450 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.181637 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.191177 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.193429 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.193590 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.195565 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.199898 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216869 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3fbac84-aac7-4288-a400-7cb5931f2c2a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216908 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-dev\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216933 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jjl\" (UniqueName: \"kubernetes.io/projected/b3fbac84-aac7-4288-a400-7cb5931f2c2a-kube-api-access-c8jjl\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216950 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 
09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.216967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-ceph\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217013 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-scripts\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-lib-modules\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217076 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217092 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217110 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217130 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217144 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217159 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217195 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217218 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-run\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217239 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217419 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-sys\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.217540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mc9\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-kube-api-access-r6mc9\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.264756 4720 scope.go:117] "RemoveContainer" containerID="8a41f39cc922d4c120179c65f4ae72b24378de8609ad7a5e6c2d389a4edcec45" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-ceph\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-scripts\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-lib-modules\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322333 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322362 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322394 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.324831 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.324945 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.325006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-lib-modules\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.325563 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.322412 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-nvme\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.328992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329087 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329123 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-run\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329170 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-sys\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mc9\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-kube-api-access-r6mc9\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329309 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3fbac84-aac7-4288-a400-7cb5931f2c2a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-dev\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329363 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jjl\" (UniqueName: \"kubernetes.io/projected/b3fbac84-aac7-4288-a400-7cb5931f2c2a-kube-api-access-c8jjl\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.329384 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.330710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.330765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.330808 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.331178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-run\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.331871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-scripts\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.331970 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-dev\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.331996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b3fbac84-aac7-4288-a400-7cb5931f2c2a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.332148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4ddf88e6-513d-474a-bf5d-82806004a740-sys\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.333626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.337666 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.338029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-ceph\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.338746 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.339210 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.339390 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ddf88e6-513d-474a-bf5d-82806004a740-config-data-custom\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.339736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-scripts\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.344846 4720 scope.go:117] "RemoveContainer" containerID="44fe4394bf4705c46aa36a7d141124ad80d2445cfc8486b42c23dd5445d26b99" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.346131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mc9\" (UniqueName: \"kubernetes.io/projected/4ddf88e6-513d-474a-bf5d-82806004a740-kube-api-access-r6mc9\") pod \"cinder-backup-0\" (UID: \"4ddf88e6-513d-474a-bf5d-82806004a740\") " 
pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.348804 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3fbac84-aac7-4288-a400-7cb5931f2c2a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.354810 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jjl\" (UniqueName: \"kubernetes.io/projected/b3fbac84-aac7-4288-a400-7cb5931f2c2a-kube-api-access-c8jjl\") pod \"cinder-scheduler-0\" (UID: \"b3fbac84-aac7-4288-a400-7cb5931f2c2a\") " pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.516502 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.547401 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 09:16:51 crc kubenswrapper[4720]: I0202 09:16:51.896157 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.094032 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.125094 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.229362 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.430232 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.440445 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.702868 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4ddf88e6-513d-474a-bf5d-82806004a740","Type":"ContainerStarted","Data":"531b2389c297969c101e8ead2002aca53bde303bf6ec39c4ef15185e5c89729c"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.702936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4ddf88e6-513d-474a-bf5d-82806004a740","Type":"ContainerStarted","Data":"0f62af9c218e4f4a66b13406f774b0387f124e64c99d188f06149c85dc6dc755"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.710381 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3fbac84-aac7-4288-a400-7cb5931f2c2a","Type":"ContainerStarted","Data":"db9541ec984aaa6e28871618d96ad7edf374e63e8badf9ece2bff150602d50e9"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.742265 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d5ebc33-71db-45cf-be50-fa0b92d38d7f","Type":"ContainerStarted","Data":"f76b36cd1444512861c86a16f703f8b1a3205eb4cdceb981c99a730b807417b7"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.764181 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c8cc9866d-t5g2d"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.766493 4720 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.788932 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8cc9866d-t5g2d"] Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.801770 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c19bbd5c-8368-477b-8014-e1de85c9abb2","Type":"ContainerStarted","Data":"db3451a1142ef5762f620a97fe356fa8cdef09c44f98cc741e035b72355261a8"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.801836 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c19bbd5c-8368-477b-8014-e1de85c9abb2","Type":"ContainerStarted","Data":"314c828e04a7d7c8dd1520aca947e61eb2b18516177147f5b692a9e7e990798e"} Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.907081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-public-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.907446 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ab26f9-1b43-4280-a23c-0124dc1cd945-logs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.907769 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj8z\" (UniqueName: \"kubernetes.io/projected/a4ab26f9-1b43-4280-a23c-0124dc1cd945-kube-api-access-cxj8z\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.907952 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-config-data\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.908064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-internal-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.908176 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-combined-ca-bundle\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.908346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-scripts\") pod 
\"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.945147 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="468e2e04-844c-47a2-a554-1fff701d0802" path="/var/lib/kubelet/pods/468e2e04-844c-47a2-a554-1fff701d0802/volumes" Feb 02 09:16:52 crc kubenswrapper[4720]: I0202 09:16:52.945759 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f53fcb-687a-4a01-9949-6c50248fd792" path="/var/lib/kubelet/pods/d2f53fcb-687a-4a01-9949-6c50248fd792/volumes" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.010164 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-combined-ca-bundle\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.010434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-scripts\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.010645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-public-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.010785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ab26f9-1b43-4280-a23c-0124dc1cd945-logs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.010942 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj8z\" (UniqueName: \"kubernetes.io/projected/a4ab26f9-1b43-4280-a23c-0124dc1cd945-kube-api-access-cxj8z\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.011081 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-config-data\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.011188 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-internal-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.011698 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ab26f9-1b43-4280-a23c-0124dc1cd945-logs\") pod \"placement-c8cc9866d-t5g2d\" (UID: 
\"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.015983 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-internal-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.016612 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-combined-ca-bundle\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.017619 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-config-data\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.030609 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-scripts\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.031134 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ab26f9-1b43-4280-a23c-0124dc1cd945-public-tls-certs\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.035548 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj8z\" (UniqueName: \"kubernetes.io/projected/a4ab26f9-1b43-4280-a23c-0124dc1cd945-kube-api-access-cxj8z\") pod \"placement-c8cc9866d-t5g2d\" (UID: \"a4ab26f9-1b43-4280-a23c-0124dc1cd945\") " pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.142353 4720 util.go:30] "No sandbox for pod can be found. 
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.359920 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.751709 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg"
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.838107 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c19bbd5c-8368-477b-8014-e1de85c9abb2","Type":"ContainerStarted","Data":"1cc94e68671b5ca923ac0fd19aeb342fbe481971403d5b352bfbd3689e142384"}
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.840697 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"4ddf88e6-513d-474a-bf5d-82806004a740","Type":"ContainerStarted","Data":"8be9a1529a7718d824caaa7eb90f1bf49fee27803331d3e026fbdd5a91a31486"}
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.842399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b3fbac84-aac7-4288-a400-7cb5931f2c2a","Type":"ContainerStarted","Data":"fb64e31de60a295089c9d047984c853ec3bffb3edb475d473c853ce4898c03f1"}
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.864018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d5ebc33-71db-45cf-be50-fa0b92d38d7f","Type":"ContainerStarted","Data":"a7490c382a8dd0b02a7203a21c94f34d36b90c6a3ce831ffa5407f76af1fc0b8"}
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.864751 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"]
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.878103 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="dnsmasq-dns" containerID="cri-o://89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef" gracePeriod=10
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.901722 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c8cc9866d-t5g2d"]
Feb 02 09:16:53 crc kubenswrapper[4720]: I0202 09:16:53.933218 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.93320104 podStartE2EDuration="3.93320104s" podCreationTimestamp="2026-02-02 09:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:53.890407336 +0000 UTC m=+1247.746032892" watchObservedRunningTime="2026-02-02 09:16:53.93320104 +0000 UTC m=+1247.788826596"
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.027807 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.027788394 podStartE2EDuration="3.027788394s" podCreationTimestamp="2026-02-02 09:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:53.973259641 +0000 UTC m=+1247.828885197" watchObservedRunningTime="2026-02-02 09:16:54.027788394 +0000 UTC m=+1247.883413950"
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.685505 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk"
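The kuberuntime_container.go:808 entry above ("Killing container with a grace period" ... gracePeriod=10) is the stop half of the dnsmasq-dns rollout: the runtime delivers SIGTERM and escalates to SIGKILL only if the process outlives the grace period. The exit codes that PLEG then reports follow the usual 128+signal shell convention: dnsmasq-dns finishes with exitCode=0 just below (a clean shutdown inside its 10 s window), while barbican-api-log further down reports exitCode=143, i.e. 128+SIGTERM. A tiny decoder of that convention, with illustrative sample values only, assuming a Unix target:

// exitcode.go - a minimal sketch of the 128+signal convention behind the
// exitCode values in this log (0 = clean exit, 143 = 128+SIGTERM, 137 = 128+SIGKILL).
package main

import (
	"fmt"
	"syscall"
)

// describe classifies a container exit code the same way a shell would.
func describe(code int) string {
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%s)", code-128, sig)
	default:
		return fmt.Sprintf("exited with error status %d", code)
	}
}

func main() {
	for _, c := range []int{0, 143, 137} { // sample values, not read from the log
		fmt.Printf("exitCode=%d: %s\n", c, describe(c))
	}
}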
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801401 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801618 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801652 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkmpd\" (UniqueName: \"kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801681 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.801756 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb\") pod \"79be7bd6-1571-415f-b5ef-f481ab24089b\" (UID: \"79be7bd6-1571-415f-b5ef-f481ab24089b\") "
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.820263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd" (OuterVolumeSpecName: "kube-api-access-lkmpd") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "kube-api-access-lkmpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.905387 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkmpd\" (UniqueName: \"kubernetes.io/projected/79be7bd6-1571-415f-b5ef-f481ab24089b-kube-api-access-lkmpd\") on node \"crc\" DevicePath \"\""
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.906743 4720 generic.go:334] "Generic (PLEG): container finished" podID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerID="89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef" exitCode=0
Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.906825 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.933703 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.970721 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9706988 podStartE2EDuration="3.9706988s" podCreationTimestamp="2026-02-02 09:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:54.939386798 +0000 UTC m=+1248.795012354" watchObservedRunningTime="2026-02-02 09:16:54.9706988 +0000 UTC m=+1248.826324356" Feb 02 09:16:54 crc kubenswrapper[4720]: I0202 09:16:54.982727 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.007129 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.007161 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.009257 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.009241554 podStartE2EDuration="5.009241554s" podCreationTimestamp="2026-02-02 09:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:54.983239907 +0000 UTC m=+1248.838865463" watchObservedRunningTime="2026-02-02 09:16:55.009241554 +0000 UTC m=+1248.864867110" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" event={"ID":"79be7bd6-1571-415f-b5ef-f481ab24089b","Type":"ContainerDied","Data":"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013227 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-kthqk" event={"ID":"79be7bd6-1571-415f-b5ef-f481ab24089b","Type":"ContainerDied","Data":"9ec2452d9a48dec826ed475416969145a4d3f41a817e2f366a8ae2c7439c61dc"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013262 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"b3fbac84-aac7-4288-a400-7cb5931f2c2a","Type":"ContainerStarted","Data":"397da6de573d25391036458ab155f5d4585e9185aa57f8d769a85830a556482a"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013288 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8cc9866d-t5g2d" event={"ID":"a4ab26f9-1b43-4280-a23c-0124dc1cd945","Type":"ContainerStarted","Data":"3e78717a614e18ec6d2d854613a98c1d5a77999e8aa4b7ba4b55231dd342feb5"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013299 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8cc9866d-t5g2d" event={"ID":"a4ab26f9-1b43-4280-a23c-0124dc1cd945","Type":"ContainerStarted","Data":"6ca2bd0dc939cc0da4859d77564e09816d2fba9a03993accf7e615433ad3811e"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013307 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d5ebc33-71db-45cf-be50-fa0b92d38d7f","Type":"ContainerStarted","Data":"9385d908dd7c08a32a62f50c8ab3b5fa890dde20b3f9af9b55b4a5ac308feabd"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.013330 4720 scope.go:117] "RemoveContainer" containerID="89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.014851 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.017277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.054782 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:44078->10.217.0.151:8443: read: connection reset by peer" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.055475 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.084265 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.107116 4720 scope.go:117] "RemoveContainer" containerID="aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.108029 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.108042 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.108055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config" (OuterVolumeSpecName: "config") pod "79be7bd6-1571-415f-b5ef-f481ab24089b" (UID: "79be7bd6-1571-415f-b5ef-f481ab24089b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.211182 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79be7bd6-1571-415f-b5ef-f481ab24089b-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.253841 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"] Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.276846 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-kthqk"] Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.278933 4720 scope.go:117] "RemoveContainer" containerID="89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef" Feb 02 09:16:55 crc kubenswrapper[4720]: E0202 09:16:55.281056 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef\": container with ID starting with 89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef not found: ID does not exist" containerID="89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.281103 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef"} err="failed to get container status \"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef\": rpc error: code = NotFound desc = could not find container \"89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef\": container with ID starting with 89f9b2db042e2d79fed555779c2efd22cc14e4091d53e98337689bab87b457ef not found: ID does not exist" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.281131 4720 scope.go:117] "RemoveContainer" containerID="aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31" Feb 02 09:16:55 crc kubenswrapper[4720]: E0202 09:16:55.285997 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31\": container with ID starting with aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31 not found: ID does not 
exist" containerID="aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.286028 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31"} err="failed to get container status \"aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31\": rpc error: code = NotFound desc = could not find container \"aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31\": container with ID starting with aebe6721ab7fe213f8d5ac7fdd8954f6d9f1c6eb77b9c01a54957b5eb5142b31 not found: ID does not exist" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.754495 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.968619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c8cc9866d-t5g2d" event={"ID":"a4ab26f9-1b43-4280-a23c-0124dc1cd945","Type":"ContainerStarted","Data":"19c1bda8d044827bbc199ecb44d04c935e6b7113f2924734689a2e33a4064feb"} Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.972058 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:55 crc kubenswrapper[4720]: I0202 09:16:55.972100 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.001661 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c8cc9866d-t5g2d" podStartSLOduration=4.001641055 podStartE2EDuration="4.001641055s" podCreationTimestamp="2026-02-02 09:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:16:55.988259867 +0000 UTC m=+1249.843885423" watchObservedRunningTime="2026-02-02 09:16:56.001641055 +0000 UTC m=+1249.857266611" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.007604 4720 generic.go:334] "Generic (PLEG): container finished" podID="e7ea3e29-f479-4d19-9200-476ab329c100" containerID="cfa0d360cae26b0c2a8dc8dbb5704822a8b671b94feadfbd85eda5647e826c27" exitCode=0 Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.008583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerDied","Data":"cfa0d360cae26b0c2a8dc8dbb5704822a8b671b94feadfbd85eda5647e826c27"} Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.050157 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.517153 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.548128 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.583141 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-754d8f7774-zcmq5" Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.627076 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57f5dcffbd-gvpfb" Feb 02 09:16:56 crc 
kubenswrapper[4720]: I0202 09:16:56.640862 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"] Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.641093 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7db56589cb-hzwrj" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api-log" containerID="cri-o://0a9be0241df1a16f26b27e8b87dc94bdbd3545abeba84b765a69bafd98906c59" gracePeriod=30 Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.641221 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7db56589cb-hzwrj" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api" containerID="cri-o://0499c590610fc5cb47e1f89d7ef38de9c057dc7e428daed25d7bb0eb40ddd90d" gracePeriod=30 Feb 02 09:16:56 crc kubenswrapper[4720]: I0202 09:16:56.948103 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" path="/var/lib/kubelet/pods/79be7bd6-1571-415f-b5ef-f481ab24089b/volumes" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.067140 4720 generic.go:334] "Generic (PLEG): container finished" podID="36847140-0ea3-4683-a408-8563e20a543a" containerID="0a9be0241df1a16f26b27e8b87dc94bdbd3545abeba84b765a69bafd98906c59" exitCode=143 Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.068059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerDied","Data":"0a9be0241df1a16f26b27e8b87dc94bdbd3545abeba84b765a69bafd98906c59"} Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.620470 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 09:16:57 crc kubenswrapper[4720]: E0202 09:16:57.621162 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="init" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.621184 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="init" Feb 02 09:16:57 crc kubenswrapper[4720]: E0202 09:16:57.621203 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="dnsmasq-dns" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.621211 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="dnsmasq-dns" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.621418 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="79be7bd6-1571-415f-b5ef-f481ab24089b" containerName="dnsmasq-dns" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.622215 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.623806 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-v4ksz" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.624709 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.625329 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.646940 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.763831 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.763885 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.763911 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6jq\" (UniqueName: \"kubernetes.io/projected/f50a6a2b-2c12-435d-801c-f97f65cf36f9-kube-api-access-hj6jq\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.763949 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.865602 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.865658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.865681 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6jq\" (UniqueName: \"kubernetes.io/projected/f50a6a2b-2c12-435d-801c-f97f65cf36f9-kube-api-access-hj6jq\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.865719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.866680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.878656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.881905 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f50a6a2b-2c12-435d-801c-f97f65cf36f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.895274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6jq\" (UniqueName: \"kubernetes.io/projected/f50a6a2b-2c12-435d-801c-f97f65cf36f9-kube-api-access-hj6jq\") pod \"openstackclient\" (UID: \"f50a6a2b-2c12-435d-801c-f97f65cf36f9\") " pod="openstack/openstackclient" Feb 02 09:16:57 crc kubenswrapper[4720]: I0202 09:16:57.942440 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 09:16:58 crc kubenswrapper[4720]: I0202 09:16:58.510654 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 09:17:00 crc kubenswrapper[4720]: I0202 09:17:00.122607 4720 generic.go:334] "Generic (PLEG): container finished" podID="36847140-0ea3-4683-a408-8563e20a543a" containerID="0499c590610fc5cb47e1f89d7ef38de9c057dc7e428daed25d7bb0eb40ddd90d" exitCode=0 Feb 02 09:17:00 crc kubenswrapper[4720]: I0202 09:17:00.122698 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerDied","Data":"0499c590610fc5cb47e1f89d7ef38de9c057dc7e428daed25d7bb0eb40ddd90d"} Feb 02 09:17:00 crc kubenswrapper[4720]: I0202 09:17:00.442802 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7db56589cb-hzwrj" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Feb 02 09:17:00 crc kubenswrapper[4720]: I0202 09:17:00.442933 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7db56589cb-hzwrj" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.132169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f50a6a2b-2c12-435d-801c-f97f65cf36f9","Type":"ContainerStarted","Data":"c4760a8019ab214cfddbbb0b24bbb783dc211ce8aaef43a5ceeeab7c6b33ca13"} Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.185869 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.295970 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.366873 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs\") pod \"36847140-0ea3-4683-a408-8563e20a543a\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.367130 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle\") pod \"36847140-0ea3-4683-a408-8563e20a543a\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.367234 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom\") pod \"36847140-0ea3-4683-a408-8563e20a543a\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.367383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2ws\" (UniqueName: \"kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws\") pod \"36847140-0ea3-4683-a408-8563e20a543a\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.367563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data\") pod \"36847140-0ea3-4683-a408-8563e20a543a\" (UID: \"36847140-0ea3-4683-a408-8563e20a543a\") " Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.372232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "36847140-0ea3-4683-a408-8563e20a543a" (UID: "36847140-0ea3-4683-a408-8563e20a543a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.380143 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws" (OuterVolumeSpecName: "kube-api-access-sj2ws") pod "36847140-0ea3-4683-a408-8563e20a543a" (UID: "36847140-0ea3-4683-a408-8563e20a543a"). InnerVolumeSpecName "kube-api-access-sj2ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.380543 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs" (OuterVolumeSpecName: "logs") pod "36847140-0ea3-4683-a408-8563e20a543a" (UID: "36847140-0ea3-4683-a408-8563e20a543a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.388763 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d89bf9699-kpnnn"] Feb 02 09:17:01 crc kubenswrapper[4720]: E0202 09:17:01.395583 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.395609 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api" Feb 02 09:17:01 crc kubenswrapper[4720]: E0202 09:17:01.395625 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api-log" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.395632 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api-log" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.395830 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.395849 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="36847140-0ea3-4683-a408-8563e20a543a" containerName="barbican-api-log" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.396785 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.405325 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.405525 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.405680 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.409494 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36847140-0ea3-4683-a408-8563e20a543a" (UID: "36847140-0ea3-4683-a408-8563e20a543a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.417085 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d89bf9699-kpnnn"] Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.430074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data" (OuterVolumeSpecName: "config-data") pod "36847140-0ea3-4683-a408-8563e20a543a" (UID: "36847140-0ea3-4683-a408-8563e20a543a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.471270 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.471502 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36847140-0ea3-4683-a408-8563e20a543a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.471603 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.471724 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36847140-0ea3-4683-a408-8563e20a543a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.471823 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2ws\" (UniqueName: \"kubernetes.io/projected/36847140-0ea3-4683-a408-8563e20a543a-kube-api-access-sj2ws\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.573817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-public-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.574420 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-log-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.574524 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-run-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.574658 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-config-data\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.574748 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-internal-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.574849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-lgfh8\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-kube-api-access-lgfh8\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.575019 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-etc-swift\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.575110 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-combined-ca-bundle\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.677846 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-public-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.677909 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-log-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.677939 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-run-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.678001 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-config-data\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.678042 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-internal-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.678075 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgfh8\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-kube-api-access-lgfh8\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.678101 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-etc-swift\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.678117 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-combined-ca-bundle\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.679216 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-log-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.682496 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-config-data\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.683775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1182b131-3e0d-417a-8f50-4e0b98e7635f-run-httpd\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.685895 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-internal-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.688389 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-public-tls-certs\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.691766 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1182b131-3e0d-417a-8f50-4e0b98e7635f-combined-ca-bundle\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.693131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-etc-swift\") pod \"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.693261 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgfh8\" (UniqueName: \"kubernetes.io/projected/1182b131-3e0d-417a-8f50-4e0b98e7635f-kube-api-access-lgfh8\") pod 
\"swift-proxy-5d89bf9699-kpnnn\" (UID: \"1182b131-3e0d-417a-8f50-4e0b98e7635f\") " pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.721172 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.746399 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 09:17:01 crc kubenswrapper[4720]: I0202 09:17:01.809570 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.151994 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7db56589cb-hzwrj" event={"ID":"36847140-0ea3-4683-a408-8563e20a543a","Type":"ContainerDied","Data":"3290be6410d8e81c79b8f749e7ba54a2f31f41736837afd833134aa4860d6bdc"} Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.152227 4720 scope.go:117] "RemoveContainer" containerID="0499c590610fc5cb47e1f89d7ef38de9c057dc7e428daed25d7bb0eb40ddd90d" Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.152368 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7db56589cb-hzwrj" Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.173099 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerStarted","Data":"27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452"} Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.173464 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerStarted","Data":"7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55"} Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.189012 4720 scope.go:117] "RemoveContainer" containerID="0a9be0241df1a16f26b27e8b87dc94bdbd3545abeba84b765a69bafd98906c59" Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.192054 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"] Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.203086 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7db56589cb-hzwrj"] Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.210150 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.910094609 podStartE2EDuration="19.210130831s" podCreationTimestamp="2026-02-02 09:16:43 +0000 UTC" firstStartedPulling="2026-02-02 09:16:44.686950707 +0000 UTC m=+1238.542576253" lastFinishedPulling="2026-02-02 09:17:00.986986919 +0000 UTC m=+1254.842612475" observedRunningTime="2026-02-02 09:17:02.196846325 +0000 UTC m=+1256.052471881" watchObservedRunningTime="2026-02-02 09:17:02.210130831 +0000 UTC m=+1256.065756387" Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.296460 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d89bf9699-kpnnn"] Feb 02 09:17:02 crc kubenswrapper[4720]: I0202 09:17:02.912500 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36847140-0ea3-4683-a408-8563e20a543a" path="/var/lib/kubelet/pods/36847140-0ea3-4683-a408-8563e20a543a/volumes" Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.169094 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-central-agent" containerID="cri-o://c76f175a3035712ace8e082ee20d11fc9c85bbaaec885902443cc416ba7f3a4d" gracePeriod=30
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.169202 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-notification-agent" containerID="cri-o://c1914132af0e590e9744c50fa77eb7c6e206f4e81af63415a3aef690c411fc39" gracePeriod=30
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.169210 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="sg-core" containerID="cri-o://10311d70ad6352264d3bc3ebd756bf6ec5403094a65d8cb6e49e74ba299f9923" gracePeriod=30
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.169397 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="proxy-httpd" containerID="cri-o://f8d1bffd3e986a5115ba8117bd84225d10b3385a7b6d94122747e828ddf583d0" gracePeriod=30
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.178059 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.194558 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d89bf9699-kpnnn" event={"ID":"1182b131-3e0d-417a-8f50-4e0b98e7635f","Type":"ContainerStarted","Data":"43d7a485e0946402cfd68c2132959f03e303324fe0ba4f11b49da47a74025e09"}
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.194603 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d89bf9699-kpnnn" event={"ID":"1182b131-3e0d-417a-8f50-4e0b98e7635f","Type":"ContainerStarted","Data":"774167a673d0818000c27055dd1417f27f0b20a96d4e9396f7d91a75fb07c9aa"}
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.194613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d89bf9699-kpnnn" event={"ID":"1182b131-3e0d-417a-8f50-4e0b98e7635f","Type":"ContainerStarted","Data":"3d9ab3ec9bbf85246051e84fe2484df97e17b5083c787193f902e4a15922b573"}
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.194737 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d89bf9699-kpnnn"
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.662202 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Feb 02 09:17:03 crc kubenswrapper[4720]: I0202 09:17:03.683598 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.206789 4720 generic.go:334] "Generic (PLEG): container finished" podID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" 
containerID="f8d1bffd3e986a5115ba8117bd84225d10b3385a7b6d94122747e828ddf583d0" exitCode=0 Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.207259 4720 generic.go:334] "Generic (PLEG): container finished" podID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerID="10311d70ad6352264d3bc3ebd756bf6ec5403094a65d8cb6e49e74ba299f9923" exitCode=2 Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.207279 4720 generic.go:334] "Generic (PLEG): container finished" podID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerID="c76f175a3035712ace8e082ee20d11fc9c85bbaaec885902443cc416ba7f3a4d" exitCode=0 Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.206856 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerDied","Data":"f8d1bffd3e986a5115ba8117bd84225d10b3385a7b6d94122747e828ddf583d0"} Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.207329 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerDied","Data":"10311d70ad6352264d3bc3ebd756bf6ec5403094a65d8cb6e49e74ba299f9923"} Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.207344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerDied","Data":"c76f175a3035712ace8e082ee20d11fc9c85bbaaec885902443cc416ba7f3a4d"} Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.207572 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:04 crc kubenswrapper[4720]: I0202 09:17:04.980330 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.006370 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d89bf9699-kpnnn" podStartSLOduration=4.00634906 podStartE2EDuration="4.00634906s" podCreationTimestamp="2026-02-02 09:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:03.237603573 +0000 UTC m=+1257.093229119" watchObservedRunningTime="2026-02-02 09:17:05.00634906 +0000 UTC m=+1258.861974616" Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.025612 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.215166 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="manila-scheduler" containerID="cri-o://f67912c1cdbc3ee37193a4d855b6fda3ff2dc6756402c2cb13731af256916048" gracePeriod=30 Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.215207 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="probe" containerID="cri-o://4b9b38d9502b2a20ad9dc133d4736acadf6582fbfcaef7638c5640330db51aa6" gracePeriod=30 Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.823575 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.824314 4720 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-log" containerID="cri-o://86e0b7986f32cfae7cdb6729b14924c5af900de8c80481e6758d2d7f4aad991c" gracePeriod=30 Feb 02 09:17:05 crc kubenswrapper[4720]: I0202 09:17:05.824468 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-httpd" containerID="cri-o://0063bc7e32cc8c9fe202eaea552f1c6222a45bbe87ed28e737dce52606ee371e" gracePeriod=30 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.247301 4720 generic.go:334] "Generic (PLEG): container finished" podID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerID="4b9b38d9502b2a20ad9dc133d4736acadf6582fbfcaef7638c5640330db51aa6" exitCode=0 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.247359 4720 generic.go:334] "Generic (PLEG): container finished" podID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerID="f67912c1cdbc3ee37193a4d855b6fda3ff2dc6756402c2cb13731af256916048" exitCode=0 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.247381 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerDied","Data":"4b9b38d9502b2a20ad9dc133d4736acadf6582fbfcaef7638c5640330db51aa6"} Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.247451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerDied","Data":"f67912c1cdbc3ee37193a4d855b6fda3ff2dc6756402c2cb13731af256916048"} Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.251050 4720 generic.go:334] "Generic (PLEG): container finished" podID="687b9563-476f-485d-bfd5-8f874470c4f2" containerID="86e0b7986f32cfae7cdb6729b14924c5af900de8c80481e6758d2d7f4aad991c" exitCode=143 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.251171 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerDied","Data":"86e0b7986f32cfae7cdb6729b14924c5af900de8c80481e6758d2d7f4aad991c"} Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.254136 4720 generic.go:334] "Generic (PLEG): container finished" podID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerID="c1914132af0e590e9744c50fa77eb7c6e206f4e81af63415a3aef690c411fc39" exitCode=0 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.254169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerDied","Data":"c1914132af0e590e9744c50fa77eb7c6e206f4e81af63415a3aef690c411fc39"} Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.474567 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.474920 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-log" containerID="cri-o://dea98d1476ace63d0b9e8c01c5088ee7147c9d41651dd519173f31d3ed05e345" gracePeriod=30 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.475359 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-httpd" containerID="cri-o://4f088e68bb9136fb2756c3d0dd16bf845f09f5d0ccc55ea9b1e7e142542bb1a1" gracePeriod=30 Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.765335 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-snbz6"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.766398 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.781780 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-snbz6"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.858999 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mxwpk"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.860733 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.869329 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxwpk"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.880860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gct\" (UniqueName: \"kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.880978 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.968336 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9e5c-account-create-update-z6qrn"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.969411 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.973747 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.980545 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rqpbp"] Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.981753 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.983191 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.983267 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.983344 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgvq\" (UniqueName: \"kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.983385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gct\" (UniqueName: \"kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:06 crc kubenswrapper[4720]: I0202 09:17:06.984155 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.004253 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gct\" (UniqueName: \"kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct\") pod \"nova-api-db-create-snbz6\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.007407 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9e5c-account-create-update-z6qrn"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.015616 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rqpbp"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.084986 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts\") pod \"nova-api-9e5c-account-create-update-z6qrn\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085126 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqvj\" (UniqueName: \"kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj\") pod \"nova-api-9e5c-account-create-update-z6qrn\" 
(UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085180 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgvq\" (UniqueName: \"kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59ws\" (UniqueName: \"kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.085939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.113440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgvq\" (UniqueName: \"kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq\") pod \"nova-cell0-db-create-mxwpk\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.140657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.163758 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ad5e-account-create-update-lk57d"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.164913 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.167075 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.173304 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ad5e-account-create-update-lk57d"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.178958 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.188584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.188629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59ws\" (UniqueName: \"kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.188653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts\") pod \"nova-api-9e5c-account-create-update-z6qrn\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.188732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqvj\" (UniqueName: \"kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj\") pod \"nova-api-9e5c-account-create-update-z6qrn\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.189649 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.190673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts\") pod \"nova-api-9e5c-account-create-update-z6qrn\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.204786 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqvj\" (UniqueName: \"kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj\") pod \"nova-api-9e5c-account-create-update-z6qrn\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.205795 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59ws\" (UniqueName: \"kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws\") pod \"nova-cell1-db-create-rqpbp\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.275042 4720 generic.go:334] "Generic (PLEG): container finished" podID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerID="dea98d1476ace63d0b9e8c01c5088ee7147c9d41651dd519173f31d3ed05e345" 
exitCode=143 Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.275118 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerDied","Data":"dea98d1476ace63d0b9e8c01c5088ee7147c9d41651dd519173f31d3ed05e345"} Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.290813 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.293383 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwq9\" (UniqueName: \"kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.294184 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.304471 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.373721 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-278f-account-create-update-pgskx"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.375049 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.377221 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.396762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwq9\" (UniqueName: \"kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.397340 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.398117 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.403508 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-278f-account-create-update-pgskx"] Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.413637 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwq9\" (UniqueName: \"kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9\") pod \"nova-cell0-ad5e-account-create-update-lk57d\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.481445 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.498974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.499063 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stb47\" (UniqueName: \"kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.600499 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.600963 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stb47\" (UniqueName: \"kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.601607 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.616768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stb47\" (UniqueName: \"kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47\") pod \"nova-cell1-278f-account-create-update-pgskx\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:07 crc kubenswrapper[4720]: I0202 09:17:07.701342 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:09 crc kubenswrapper[4720]: I0202 09:17:09.297494 4720 generic.go:334] "Generic (PLEG): container finished" podID="687b9563-476f-485d-bfd5-8f874470c4f2" containerID="0063bc7e32cc8c9fe202eaea552f1c6222a45bbe87ed28e737dce52606ee371e" exitCode=0 Feb 02 09:17:09 crc kubenswrapper[4720]: I0202 09:17:09.297550 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerDied","Data":"0063bc7e32cc8c9fe202eaea552f1c6222a45bbe87ed28e737dce52606ee371e"} Feb 02 09:17:10 crc kubenswrapper[4720]: I0202 09:17:10.309936 4720 generic.go:334] "Generic (PLEG): container finished" podID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerID="4f088e68bb9136fb2756c3d0dd16bf845f09f5d0ccc55ea9b1e7e142542bb1a1" exitCode=0 Feb 02 09:17:10 crc kubenswrapper[4720]: I0202 09:17:10.310002 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerDied","Data":"4f088e68bb9136fb2756c3d0dd16bf845f09f5d0ccc55ea9b1e7e142542bb1a1"} Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.682758 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.733846 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.742265 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d89bf9699-kpnnn" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794222 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794243 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794313 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5crx\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: 
I0202 09:17:11.794429 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794502 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.794547 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs\") pod \"687b9563-476f-485d-bfd5-8f874470c4f2\" (UID: \"687b9563-476f-485d-bfd5-8f874470c4f2\") " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.804865 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts" (OuterVolumeSpecName: "scripts") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.808809 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.809212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs" (OuterVolumeSpecName: "logs") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.809249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph" (OuterVolumeSpecName: "ceph") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.810546 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.811724 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx" (OuterVolumeSpecName: "kube-api-access-h5crx") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "kube-api-access-h5crx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.878774 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897541 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897581 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5crx\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-kube-api-access-h5crx\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897591 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897601 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/687b9563-476f-485d-bfd5-8f874470c4f2-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897610 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/687b9563-476f-485d-bfd5-8f874470c4f2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897618 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.897650 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.922043 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.929940 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.932069 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data" (OuterVolumeSpecName: "config-data") pod "687b9563-476f-485d-bfd5-8f874470c4f2" (UID: "687b9563-476f-485d-bfd5-8f874470c4f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.989184 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:11 crc kubenswrapper[4720]: I0202 09:17:11.998032 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.000234 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.000278 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.000288 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/687b9563-476f-485d-bfd5-8f874470c4f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.100948 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101095 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101139 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101172 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmpd\" (UniqueName: 
\"kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101205 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101221 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101236 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101256 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101277 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101302 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101325 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101352 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klmrc\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101433 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run\") pod \"0fbf42a7-347e-4355-afc5-4e70bbf58271\" (UID: \"0fbf42a7-347e-4355-afc5-4e70bbf58271\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.101455 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data\") pod \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\" (UID: \"dcb2b63a-ae1d-4400-877d-92cacdddfcbe\") " Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.105165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs" (OuterVolumeSpecName: "logs") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.105962 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.107422 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.112818 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.116285 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.116307 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.116315 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.116323 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fbf42a7-347e-4355-afc5-4e70bbf58271-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.117388 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd" (OuterVolumeSpecName: "kube-api-access-2cmpd") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "kube-api-access-2cmpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.146498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts" (OuterVolumeSpecName: "scripts") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.146665 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts" (OuterVolumeSpecName: "scripts") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.148083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.148286 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc" (OuterVolumeSpecName: "kube-api-access-klmrc") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "kube-api-access-klmrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.159538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph" (OuterVolumeSpecName: "ceph") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.188599 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220094 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220126 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220136 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmpd\" (UniqueName: \"kubernetes.io/projected/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-kube-api-access-2cmpd\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220145 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220154 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220177 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.220187 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klmrc\" (UniqueName: \"kubernetes.io/projected/0fbf42a7-347e-4355-afc5-4e70bbf58271-kube-api-access-klmrc\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.278012 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.282920 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.283612 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.303128 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxwpk"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.303182 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.317177 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-snbz6"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.326040 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-278f-account-create-update-pgskx"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.331314 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.331333 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.331343 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.331352 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.335364 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9e5c-account-create-update-z6qrn"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.410203 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data" (OuterVolumeSpecName: "config-data") pod "0fbf42a7-347e-4355-afc5-4e70bbf58271" (UID: "0fbf42a7-347e-4355-afc5-4e70bbf58271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.412248 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.412984 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcb2b63a-ae1d-4400-877d-92cacdddfcbe","Type":"ContainerDied","Data":"46847029799ff372116831cbeffd6df8e0a028cfc31628b0546d6a33d871d2e6"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.413032 4720 scope.go:117] "RemoveContainer" containerID="f8d1bffd3e986a5115ba8117bd84225d10b3385a7b6d94122747e828ddf583d0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.416683 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data" (OuterVolumeSpecName: "config-data") pod "dcb2b63a-ae1d-4400-877d-92cacdddfcbe" (UID: "dcb2b63a-ae1d-4400-877d-92cacdddfcbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.423570 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f50a6a2b-2c12-435d-801c-f97f65cf36f9","Type":"ContainerStarted","Data":"62a23036da32ab5888aef5bc2f7b1d9c1ad1207203bcc9b4edf622488599ea34"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.430719 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.431108 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4bc972d9-758e-4a27-9d67-ce14b4ece48b","Type":"ContainerDied","Data":"5c72e8f4c8d4762000ea969511156ec8177a09bc86912860d82ba12990b2a912"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.432899 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb2b63a-ae1d-4400-877d-92cacdddfcbe-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.432915 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fbf42a7-347e-4355-afc5-4e70bbf58271-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.433308 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxwpk" event={"ID":"0b62cf19-56cf-4b24-bf4b-417906e61501","Type":"ContainerStarted","Data":"d246bb8e7b3b3133a30c7273db390b7165d4a83ce46ed66e2f7345bfe53da290"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.434491 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0fbf42a7-347e-4355-afc5-4e70bbf58271","Type":"ContainerDied","Data":"d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.434542 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.451167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"687b9563-476f-485d-bfd5-8f874470c4f2","Type":"ContainerDied","Data":"3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371"}
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.451213 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.477631 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.064411906 podStartE2EDuration="15.477615558s" podCreationTimestamp="2026-02-02 09:16:57 +0000 UTC" firstStartedPulling="2026-02-02 09:17:00.888356941 +0000 UTC m=+1254.743982497" lastFinishedPulling="2026-02-02 09:17:11.301560593 +0000 UTC m=+1265.157186149" observedRunningTime="2026-02-02 09:17:12.448998198 +0000 UTC m=+1266.304623754" watchObservedRunningTime="2026-02-02 09:17:12.477615558 +0000 UTC m=+1266.333241114"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.478873 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ad5e-account-create-update-lk57d"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.511939 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rqpbp"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.533827 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.533979 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.534043 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.534091 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.534162 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.534213 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzj27\" (UniqueName: \"kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27\") pod \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\" (UID: \"4bc972d9-758e-4a27-9d67-ce14b4ece48b\") "
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.537872 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.546031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27" (OuterVolumeSpecName: "kube-api-access-vzj27") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "kube-api-access-vzj27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.550016 4720 scope.go:117] "RemoveContainer" containerID="10311d70ad6352264d3bc3ebd756bf6ec5403094a65d8cb6e49e74ba299f9923"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.563084 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.564573 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts" (OuterVolumeSpecName: "scripts") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.566013 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.580858 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.609983 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610375 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="sg-core"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610393 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="sg-core"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610404 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610410 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610423 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-central-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610430 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-central-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610441 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-notification-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610448 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-notification-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610463 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610469 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610477 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610482 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610493 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="proxy-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610498 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="proxy-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610510 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="manila-scheduler"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610515 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="manila-scheduler"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610524 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="probe"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610530 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="probe"
Feb 02 09:17:12 crc kubenswrapper[4720]: E0202 09:17:12.610542 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610548 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610707 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="sg-core"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610722 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="proxy-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610734 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-central-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610746 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610758 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="probe"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610768 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610781 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" containerName="manila-scheduler"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610791 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-log"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610800 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" containerName="glance-httpd"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.610811 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" containerName="ceilometer-notification-agent"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.611737 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.613204 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dmxj"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.614401 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.614670 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.614910 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.621002 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.631334 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.636090 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.636121 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.636132 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc972d9-758e-4a27-9d67-ce14b4ece48b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.636140 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzj27\" (UniqueName: \"kubernetes.io/projected/4bc972d9-758e-4a27-9d67-ce14b4ece48b-kube-api-access-vzj27\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.655069 4720 scope.go:117] "RemoveContainer" containerID="c1914132af0e590e9744c50fa77eb7c6e206f4e81af63415a3aef690c411fc39"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.663928 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.673849 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.675132 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.675463 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.680987 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.681632 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.689282 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742516 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742640 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742723 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742739 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfcg\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-kube-api-access-jgfcg\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742808 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.742852 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.776070 4720 scope.go:117] "RemoveContainer" containerID="c76f175a3035712ace8e082ee20d11fc9c85bbaaec885902443cc416ba7f3a4d"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.844933 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-logs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845038 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-ceph\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845183 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845258 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845486 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6vml\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-kube-api-access-h6vml\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845554 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845610 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845630 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-scripts\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845670 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-config-data\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845712 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845921 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfcg\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-kube-api-access-jgfcg\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.845943 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.846320 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.846608 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.848675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-logs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.850946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbdee974-abaf-4569-b6d0-e2efe90a53b1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.857544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.866606 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.867349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.872748 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbdee974-abaf-4569-b6d0-e2efe90a53b1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.873773 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfcg\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-kube-api-access-jgfcg\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.879358 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dbdee974-abaf-4569-b6d0-e2efe90a53b1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.922686 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbf42a7-347e-4355-afc5-4e70bbf58271" path="/var/lib/kubelet/pods/0fbf42a7-347e-4355-afc5-4e70bbf58271/volumes"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.936635 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687b9563-476f-485d-bfd5-8f874470c4f2" path="/var/lib/kubelet/pods/687b9563-476f-485d-bfd5-8f874470c4f2/volumes"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.945370 4720 scope.go:117] "RemoveContainer" containerID="4b9b38d9502b2a20ad9dc133d4736acadf6582fbfcaef7638c5640330db51aa6"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950197 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950233 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-scripts\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950273 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-config-data\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950410 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950503 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-logs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-ceph\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950570 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.950821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6vml\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-kube-api-access-h6vml\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.957015 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.960192 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.971549 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.972742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.981151 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1644b005-02e0-41ed-a421-289e79e6968f-logs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.984406 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-config-data\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.986609 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-ceph\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.998624 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-scripts\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:12 crc kubenswrapper[4720]: I0202 09:17:12.998697 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.004378 4720 scope.go:117] "RemoveContainer" containerID="f67912c1cdbc3ee37193a4d855b6fda3ff2dc6756402c2cb13731af256916048"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.005449 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1644b005-02e0-41ed-a421-289e79e6968f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.008105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6vml\" (UniqueName: \"kubernetes.io/projected/1644b005-02e0-41ed-a421-289e79e6968f-kube-api-access-h6vml\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.013974 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.016287 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.019544 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.026636 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.032672 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.046452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"dbdee974-abaf-4569-b6d0-e2efe90a53b1\") " pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.125180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data" (OuterVolumeSpecName: "config-data") pod "4bc972d9-758e-4a27-9d67-ce14b4ece48b" (UID: "4bc972d9-758e-4a27-9d67-ce14b4ece48b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.155385 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.155462 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.155487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhfc\" (UniqueName: \"kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.155600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.156176 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.156257 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.156279 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.156338 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc972d9-758e-4a27-9d67-ce14b4ece48b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.167949 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1644b005-02e0-41ed-a421-289e79e6968f\") " pod="openstack/glance-default-external-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.211502 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.246260 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.258665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259299 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259399 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.259493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhfc\" (UniqueName: \"kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.260465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.260775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.264953 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.267553 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.268580 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.270906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.295501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhfc\" (UniqueName: \"kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc\") pod \"ceilometer-0\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.450717 4720 scope.go:117] "RemoveContainer" containerID="4f088e68bb9136fb2756c3d0dd16bf845f09f5d0ccc55ea9b1e7e142542bb1a1"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.469982 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.498741 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" event={"ID":"87af8537-923b-4bee-8c85-aa7f3d179b6d","Type":"ContainerStarted","Data":"4703a8751a041adc0461ab9ca0d48f9fa6c48eca341014be6534c4c7f573f9e0"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.500067 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.511124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxwpk" event={"ID":"0b62cf19-56cf-4b24-bf4b-417906e61501","Type":"ContainerStarted","Data":"24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.526394 4720 generic.go:334] "Generic (PLEG): container finished" podID="92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" containerID="42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46" exitCode=0
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.526471 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-278f-account-create-update-pgskx" event={"ID":"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7","Type":"ContainerDied","Data":"42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.526501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-278f-account-create-update-pgskx" event={"ID":"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7","Type":"ContainerStarted","Data":"4287574e4225381fe530a38ed3c97bde02735f928f3de6e4ca52ad00bd4135d6"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.543483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" event={"ID":"1550113c-09da-4c3e-9ee1-cd4f28eaa995","Type":"ContainerStarted","Data":"ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.543533 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" event={"ID":"1550113c-09da-4c3e-9ee1-cd4f28eaa995","Type":"ContainerStarted","Data":"83441b6406ccdbc1f9926715144256e507082a919707d7aab5513abb2775da28"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.547835 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rqpbp" event={"ID":"c20dd138-dcb5-4c76-905c-b9eb86dfd50b","Type":"ContainerStarted","Data":"5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.547875 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rqpbp" event={"ID":"c20dd138-dcb5-4c76-905c-b9eb86dfd50b","Type":"ContainerStarted","Data":"f0aec05d3a20da59ccb47f05d12a40aec67abe82ef1f1e2420564058f1836915"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.574930 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" podStartSLOduration=6.574912405 podStartE2EDuration="6.574912405s" podCreationTimestamp="2026-02-02 09:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:13.568847881 +0000 UTC m=+1267.424473437" watchObservedRunningTime="2026-02-02 09:17:13.574912405 +0000 UTC m=+1267.430537961"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.577317 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snbz6" event={"ID":"8db10941-ba5e-445a-a995-bd1493d5270c","Type":"ContainerStarted","Data":"d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.577339 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snbz6" event={"ID":"8db10941-ba5e-445a-a995-bd1493d5270c","Type":"ContainerStarted","Data":"fca74e70efcc3b440b347108cf13a7824bb88d4440282246ec348620c818e140"}
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.684037 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b896b6bb4-gxblv" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.732861 4720 scope.go:117] "RemoveContainer" containerID="dea98d1476ace63d0b9e8c01c5088ee7147c9d41651dd519173f31d3ed05e345"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.752110 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.775103 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.799733 4720 scope.go:117] "RemoveContainer" containerID="0063bc7e32cc8c9fe202eaea552f1c6222a45bbe87ed28e737dce52606ee371e"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.804561 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.806710 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.810846 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.813659 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.821645 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.842132 4720 scope.go:117] "RemoveContainer" containerID="86e0b7986f32cfae7cdb6729b14924c5af900de8c80481e6758d2d7f4aad991c"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.842421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878116 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754lw\" (UniqueName: \"kubernetes.io/projected/742e6521-0c7a-4dfe-9c8f-1a086e180d73-kube-api-access-754lw\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878164 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-scripts\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878416 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.878629 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742e6521-0c7a-4dfe-9c8f-1a086e180d73-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.983985 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984504 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-scripts\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984546 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984610 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742e6521-0c7a-4dfe-9c8f-1a086e180d73-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754lw\" (UniqueName: \"kubernetes.io/projected/742e6521-0c7a-4dfe-9c8f-1a086e180d73-kube-api-access-754lw\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984687 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.984712 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.985438 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/742e6521-0c7a-4dfe-9c8f-1a086e180d73-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.998710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:13 crc kubenswrapper[4720]: I0202 09:17:13.999027 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-scripts\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.000159 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.000551 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e6521-0c7a-4dfe-9c8f-1a086e180d73-config-data\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.002185 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754lw\" (UniqueName: \"kubernetes.io/projected/742e6521-0c7a-4dfe-9c8f-1a086e180d73-kube-api-access-754lw\") pod \"manila-scheduler-0\" (UID: \"742e6521-0c7a-4dfe-9c8f-1a086e180d73\") " pod="openstack/manila-scheduler-0"
Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.044004 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.132893 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.470859 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dc5779b69-676fs" Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.528508 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.529053 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8565544576-78c6h" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-api" containerID="cri-o://184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4" gracePeriod=30 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.529452 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8565544576-78c6h" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-httpd" containerID="cri-o://6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d" gracePeriod=30 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.591980 4720 generic.go:334] "Generic (PLEG): container finished" podID="0b62cf19-56cf-4b24-bf4b-417906e61501" containerID="24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7" exitCode=0 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.592046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxwpk" event={"ID":"0b62cf19-56cf-4b24-bf4b-417906e61501","Type":"ContainerDied","Data":"24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.598611 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbdee974-abaf-4569-b6d0-e2efe90a53b1","Type":"ContainerStarted","Data":"52e7587c3ad5f6a497f4ec096d4eb9ad0953e27251b64655001dffe8020839bc"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.601502 4720 generic.go:334] "Generic (PLEG): container finished" podID="c20dd138-dcb5-4c76-905c-b9eb86dfd50b" containerID="5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a" exitCode=0 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.601567 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rqpbp" event={"ID":"c20dd138-dcb5-4c76-905c-b9eb86dfd50b","Type":"ContainerDied","Data":"5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.609870 4720 generic.go:334] "Generic (PLEG): container finished" podID="8db10941-ba5e-445a-a995-bd1493d5270c" containerID="d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70" exitCode=0 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.609938 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snbz6" event={"ID":"8db10941-ba5e-445a-a995-bd1493d5270c","Type":"ContainerDied","Data":"d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.619908 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerStarted","Data":"83312e3600bbcba858784ce8cdcd366cf50a1d62885d8c67c341956b8d8bda23"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.633645 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"1644b005-02e0-41ed-a421-289e79e6968f","Type":"ContainerStarted","Data":"4704e17efc45a415c3097826bf3a40d88523b2cef7518e0e9319c085a684ba3d"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.633694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1644b005-02e0-41ed-a421-289e79e6968f","Type":"ContainerStarted","Data":"8451efde3063c96d9e0a1c44773752092d8ead9c86bfaf2032a53cd4bceb3ec1"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.637868 4720 generic.go:334] "Generic (PLEG): container finished" podID="1550113c-09da-4c3e-9ee1-cd4f28eaa995" containerID="ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2" exitCode=0 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.638080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" event={"ID":"1550113c-09da-4c3e-9ee1-cd4f28eaa995","Type":"ContainerDied","Data":"ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.642141 4720 generic.go:334] "Generic (PLEG): container finished" podID="87af8537-923b-4bee-8c85-aa7f3d179b6d" containerID="40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad" exitCode=0 Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.642202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" event={"ID":"87af8537-923b-4bee-8c85-aa7f3d179b6d","Type":"ContainerDied","Data":"40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad"} Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.651681 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.904889 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc972d9-758e-4a27-9d67-ce14b4ece48b" path="/var/lib/kubelet/pods/4bc972d9-758e-4a27-9d67-ce14b4ece48b/volumes" Feb 02 09:17:14 crc kubenswrapper[4720]: I0202 09:17:14.906179 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb2b63a-ae1d-4400-877d-92cacdddfcbe" path="/var/lib/kubelet/pods/dcb2b63a-ae1d-4400-877d-92cacdddfcbe/volumes" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.040271 4720 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6f54a575-b00e-4748-ab42-499cf997a92c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6f54a575-b00e-4748-ab42-499cf997a92c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6f54a575_b00e_4748_ab42_499cf997a92c.slice" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.378118 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.384864 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.424608 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.425424 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533174 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts\") pod \"8db10941-ba5e-445a-a995-bd1493d5270c\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533482 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts\") pod \"0b62cf19-56cf-4b24-bf4b-417906e61501\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533513 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c59ws\" (UniqueName: \"kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws\") pod \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533591 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbgvq\" (UniqueName: \"kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq\") pod \"0b62cf19-56cf-4b24-bf4b-417906e61501\" (UID: \"0b62cf19-56cf-4b24-bf4b-417906e61501\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533612 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts\") pod \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\" (UID: \"c20dd138-dcb5-4c76-905c-b9eb86dfd50b\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533656 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gct\" (UniqueName: \"kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct\") pod \"8db10941-ba5e-445a-a995-bd1493d5270c\" (UID: \"8db10941-ba5e-445a-a995-bd1493d5270c\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts\") pod \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.533719 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stb47\" (UniqueName: \"kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47\") pod \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\" (UID: \"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7\") " Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.535206 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b62cf19-56cf-4b24-bf4b-417906e61501" (UID: "0b62cf19-56cf-4b24-bf4b-417906e61501"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.535579 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8db10941-ba5e-445a-a995-bd1493d5270c" (UID: "8db10941-ba5e-445a-a995-bd1493d5270c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.535966 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c20dd138-dcb5-4c76-905c-b9eb86dfd50b" (UID: "c20dd138-dcb5-4c76-905c-b9eb86dfd50b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.536286 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" (UID: "92fd9095-5cd6-4a99-a5a0-fda750c1a6b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.548656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws" (OuterVolumeSpecName: "kube-api-access-c59ws") pod "c20dd138-dcb5-4c76-905c-b9eb86dfd50b" (UID: "c20dd138-dcb5-4c76-905c-b9eb86dfd50b"). InnerVolumeSpecName "kube-api-access-c59ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.549167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47" (OuterVolumeSpecName: "kube-api-access-stb47") pod "92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" (UID: "92fd9095-5cd6-4a99-a5a0-fda750c1a6b7"). InnerVolumeSpecName "kube-api-access-stb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.552541 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct" (OuterVolumeSpecName: "kube-api-access-t5gct") pod "8db10941-ba5e-445a-a995-bd1493d5270c" (UID: "8db10941-ba5e-445a-a995-bd1493d5270c"). InnerVolumeSpecName "kube-api-access-t5gct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.556536 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq" (OuterVolumeSpecName: "kube-api-access-zbgvq") pod "0b62cf19-56cf-4b24-bf4b-417906e61501" (UID: "0b62cf19-56cf-4b24-bf4b-417906e61501"). InnerVolumeSpecName "kube-api-access-zbgvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.635962 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8db10941-ba5e-445a-a995-bd1493d5270c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.635991 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b62cf19-56cf-4b24-bf4b-417906e61501-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636004 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c59ws\" (UniqueName: \"kubernetes.io/projected/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-kube-api-access-c59ws\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636013 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbgvq\" (UniqueName: \"kubernetes.io/projected/0b62cf19-56cf-4b24-bf4b-417906e61501-kube-api-access-zbgvq\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636022 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c20dd138-dcb5-4c76-905c-b9eb86dfd50b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636031 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gct\" (UniqueName: \"kubernetes.io/projected/8db10941-ba5e-445a-a995-bd1493d5270c-kube-api-access-t5gct\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636040 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.636048 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stb47\" (UniqueName: \"kubernetes.io/projected/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7-kube-api-access-stb47\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.684998 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbdee974-abaf-4569-b6d0-e2efe90a53b1","Type":"ContainerStarted","Data":"f487b8dcb5eb9f679bc6b3d200d120fc954fd4a328a824f13ce4212b0e041e78"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.691580 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerStarted","Data":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.698276 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-snbz6" event={"ID":"8db10941-ba5e-445a-a995-bd1493d5270c","Type":"ContainerDied","Data":"fca74e70efcc3b440b347108cf13a7824bb88d4440282246ec348620c818e140"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.698315 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca74e70efcc3b440b347108cf13a7824bb88d4440282246ec348620c818e140" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.698367 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-snbz6" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.708475 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-278f-account-create-update-pgskx" event={"ID":"92fd9095-5cd6-4a99-a5a0-fda750c1a6b7","Type":"ContainerDied","Data":"4287574e4225381fe530a38ed3c97bde02735f928f3de6e4ca52ad00bd4135d6"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.708516 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4287574e4225381fe530a38ed3c97bde02735f928f3de6e4ca52ad00bd4135d6" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.708573 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-278f-account-create-update-pgskx" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.718458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"742e6521-0c7a-4dfe-9c8f-1a086e180d73","Type":"ContainerStarted","Data":"8fab456a1286d8b3967ee279cf2f4386f29fd830fb512e94f2aa4eb247abb504"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.718506 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"742e6521-0c7a-4dfe-9c8f-1a086e180d73","Type":"ContainerStarted","Data":"fccf0e9ba278a2a55da8df20af771cf633a317a4f75b745761a3635b4027b52b"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.722777 4720 generic.go:334] "Generic (PLEG): container finished" podID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerID="6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d" exitCode=0 Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.722843 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerDied","Data":"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.723987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxwpk" event={"ID":"0b62cf19-56cf-4b24-bf4b-417906e61501","Type":"ContainerDied","Data":"d246bb8e7b3b3133a30c7273db390b7165d4a83ce46ed66e2f7345bfe53da290"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.724015 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d246bb8e7b3b3133a30c7273db390b7165d4a83ce46ed66e2f7345bfe53da290" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.724082 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxwpk" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.735504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rqpbp" event={"ID":"c20dd138-dcb5-4c76-905c-b9eb86dfd50b","Type":"ContainerDied","Data":"f0aec05d3a20da59ccb47f05d12a40aec67abe82ef1f1e2420564058f1836915"} Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.735542 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0aec05d3a20da59ccb47f05d12a40aec67abe82ef1f1e2420564058f1836915" Feb 02 09:17:15 crc kubenswrapper[4720]: I0202 09:17:15.735931 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rqpbp" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.107284 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.308040 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.312276 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.323422 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.369101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skwq9\" (UniqueName: \"kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9\") pod \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.369209 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts\") pod \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\" (UID: \"1550113c-09da-4c3e-9ee1-cd4f28eaa995\") " Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.370254 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1550113c-09da-4c3e-9ee1-cd4f28eaa995" (UID: "1550113c-09da-4c3e-9ee1-cd4f28eaa995"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.412102 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9" (OuterVolumeSpecName: "kube-api-access-skwq9") pod "1550113c-09da-4c3e-9ee1-cd4f28eaa995" (UID: "1550113c-09da-4c3e-9ee1-cd4f28eaa995"). InnerVolumeSpecName "kube-api-access-skwq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.470521 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnqvj\" (UniqueName: \"kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj\") pod \"87af8537-923b-4bee-8c85-aa7f3d179b6d\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.470611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts\") pod \"87af8537-923b-4bee-8c85-aa7f3d179b6d\" (UID: \"87af8537-923b-4bee-8c85-aa7f3d179b6d\") " Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.471095 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skwq9\" (UniqueName: \"kubernetes.io/projected/1550113c-09da-4c3e-9ee1-cd4f28eaa995-kube-api-access-skwq9\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.471112 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1550113c-09da-4c3e-9ee1-cd4f28eaa995-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.471555 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87af8537-923b-4bee-8c85-aa7f3d179b6d" (UID: "87af8537-923b-4bee-8c85-aa7f3d179b6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.475104 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj" (OuterVolumeSpecName: "kube-api-access-pnqvj") pod "87af8537-923b-4bee-8c85-aa7f3d179b6d" (UID: "87af8537-923b-4bee-8c85-aa7f3d179b6d"). InnerVolumeSpecName "kube-api-access-pnqvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.572760 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnqvj\" (UniqueName: \"kubernetes.io/projected/87af8537-923b-4bee-8c85-aa7f3d179b6d-kube-api-access-pnqvj\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.573039 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87af8537-923b-4bee-8c85-aa7f3d179b6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.744931 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1644b005-02e0-41ed-a421-289e79e6968f","Type":"ContainerStarted","Data":"91783cbf101c5151ae73d705be28cfb3edd8c298382a18830f55de473c95c797"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.747007 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.747059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ad5e-account-create-update-lk57d" event={"ID":"1550113c-09da-4c3e-9ee1-cd4f28eaa995","Type":"ContainerDied","Data":"83441b6406ccdbc1f9926715144256e507082a919707d7aab5513abb2775da28"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.747478 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83441b6406ccdbc1f9926715144256e507082a919707d7aab5513abb2775da28" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.748358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" event={"ID":"87af8537-923b-4bee-8c85-aa7f3d179b6d","Type":"ContainerDied","Data":"4703a8751a041adc0461ab9ca0d48f9fa6c48eca341014be6534c4c7f573f9e0"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.748450 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4703a8751a041adc0461ab9ca0d48f9fa6c48eca341014be6534c4c7f573f9e0" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.748547 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9e5c-account-create-update-z6qrn" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.753149 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerStarted","Data":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.757569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"742e6521-0c7a-4dfe-9c8f-1a086e180d73","Type":"ContainerStarted","Data":"beeec1dcdd531f243ee1ecf431089ae227a0d9358f2350e003b3d9f69e79205a"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.760079 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="manila-share" containerID="cri-o://7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55" gracePeriod=30 Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.760483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dbdee974-abaf-4569-b6d0-e2efe90a53b1","Type":"ContainerStarted","Data":"df1f401e6968e26f62e4345fdfb03285e89b0cd3ba45435307e9dee8134fcc05"} Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.761503 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="probe" containerID="cri-o://27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452" gracePeriod=30 Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.773221 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.773205248 podStartE2EDuration="4.773205248s" podCreationTimestamp="2026-02-02 09:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:16.769540911 +0000 UTC m=+1270.625166477" watchObservedRunningTime="2026-02-02 09:17:16.773205248 +0000 UTC 
m=+1270.628830804" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.804832 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.804812607 podStartE2EDuration="3.804812607s" podCreationTimestamp="2026-02-02 09:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:16.793282464 +0000 UTC m=+1270.648908020" watchObservedRunningTime="2026-02-02 09:17:16.804812607 +0000 UTC m=+1270.660438163" Feb 02 09:17:16 crc kubenswrapper[4720]: I0202 09:17:16.817364 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.817348954 podStartE2EDuration="4.817348954s" podCreationTimestamp="2026-02-02 09:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:16.815785217 +0000 UTC m=+1270.671410853" watchObservedRunningTime="2026-02-02 09:17:16.817348954 +0000 UTC m=+1270.672974510" Feb 02 09:17:16 crc kubenswrapper[4720]: E0202 09:17:16.822165 4720 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3212dd944b51b74f72af1ac669e67e8d125018a1b411f332e712d861fd8f498c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3212dd944b51b74f72af1ac669e67e8d125018a1b411f332e712d861fd8f498c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-external-api-0_0fbf42a7-347e-4355-afc5-4e70bbf58271/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-external-api-0_0fbf42a7-347e-4355-afc5-4e70bbf58271/glance-httpd/0.log: no such file or directory Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.540971 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582244 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vhqsn"] Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582732 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-httpd" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582753 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-httpd" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582769 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-api" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582779 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-api" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582803 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b62cf19-56cf-4b24-bf4b-417906e61501" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582811 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b62cf19-56cf-4b24-bf4b-417906e61501" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582829 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db10941-ba5e-445a-a995-bd1493d5270c" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582836 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db10941-ba5e-445a-a995-bd1493d5270c" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582850 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20dd138-dcb5-4c76-905c-b9eb86dfd50b" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582858 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20dd138-dcb5-4c76-905c-b9eb86dfd50b" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582886 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582895 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582910 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87af8537-923b-4bee-8c85-aa7f3d179b6d" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582920 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="87af8537-923b-4bee-8c85-aa7f3d179b6d" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.582955 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1550113c-09da-4c3e-9ee1-cd4f28eaa995" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.582964 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1550113c-09da-4c3e-9ee1-cd4f28eaa995" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc 
kubenswrapper[4720]: I0202 09:17:17.583186 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-api" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583204 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583226 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="87af8537-923b-4bee-8c85-aa7f3d179b6d" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583243 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerName="neutron-httpd" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583254 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1550113c-09da-4c3e-9ee1-cd4f28eaa995" containerName="mariadb-account-create-update" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583268 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b62cf19-56cf-4b24-bf4b-417906e61501" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583286 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20dd138-dcb5-4c76-905c-b9eb86dfd50b" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.583302 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db10941-ba5e-445a-a995-bd1493d5270c" containerName="mariadb-database-create" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.584100 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.588632 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.590243 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle\") pod \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.590420 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config\") pod \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.590453 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs\") pod \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.590551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config\") pod \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.590582 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pn59z\" (UniqueName: \"kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z\") pod \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\" (UID: \"41b88e2b-4fc1-4047-b366-d5e8571a4c89\") " Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.591216 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pkl7g" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.591231 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.599096 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vhqsn"] Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.602621 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z" (OuterVolumeSpecName: "kube-api-access-pn59z") pod "41b88e2b-4fc1-4047-b366-d5e8571a4c89" (UID: "41b88e2b-4fc1-4047-b366-d5e8571a4c89"). InnerVolumeSpecName "kube-api-access-pn59z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.646720 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41b88e2b-4fc1-4047-b366-d5e8571a4c89" (UID: "41b88e2b-4fc1-4047-b366-d5e8571a4c89"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694080 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gmk\" (UniqueName: \"kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694323 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694589 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-httpd-config\") on 
node \"crc\" DevicePath \"\"" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.694600 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn59z\" (UniqueName: \"kubernetes.io/projected/41b88e2b-4fc1-4047-b366-d5e8571a4c89-kube-api-access-pn59z\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.766168 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41b88e2b-4fc1-4047-b366-d5e8571a4c89" (UID: "41b88e2b-4fc1-4047-b366-d5e8571a4c89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.766276 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config" (OuterVolumeSpecName: "config") pod "41b88e2b-4fc1-4047-b366-d5e8571a4c89" (UID: "41b88e2b-4fc1-4047-b366-d5e8571a4c89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.795970 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.796022 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.796074 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.796168 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gmk\" (UniqueName: \"kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.796244 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.796261 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.801347 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.801588 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.806073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.806085 4720 generic.go:334] "Generic (PLEG): container finished" podID="9f8bec59-9988-4424-9eab-98f0ea954808" containerID="27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452" exitCode=0 Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.806112 4720 generic.go:334] "Generic (PLEG): container finished" podID="9f8bec59-9988-4424-9eab-98f0ea954808" containerID="7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55" exitCode=1 Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.806194 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerDied","Data":"27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452"} Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.806222 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerDied","Data":"7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55"} Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.808828 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerStarted","Data":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.810037 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41b88e2b-4fc1-4047-b366-d5e8571a4c89" (UID: "41b88e2b-4fc1-4047-b366-d5e8571a4c89"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.811179 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gmk\" (UniqueName: \"kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk\") pod \"nova-cell0-conductor-db-sync-vhqsn\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.823078 4720 generic.go:334] "Generic (PLEG): container finished" podID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" containerID="184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4" exitCode=0 Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.824076 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8565544576-78c6h" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.825109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerDied","Data":"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4"} Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.825154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8565544576-78c6h" event={"ID":"41b88e2b-4fc1-4047-b366-d5e8571a4c89","Type":"ContainerDied","Data":"3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7"} Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.825173 4720 scope.go:117] "RemoveContainer" containerID="6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.879513 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.902584 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.902637 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.902685 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.902873 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8565544576-78c6h"] Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.904186 4720 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b88e2b-4fc1-4047-b366-d5e8571a4c89-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.907651 4720 scope.go:117] "RemoveContainer" containerID="184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.909999 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.910073 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0" gracePeriod=600 Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.917858 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.931189 4720 scope.go:117] "RemoveContainer" containerID="6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.931618 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d\": container with ID starting with 6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d not found: ID does not exist" containerID="6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.931646 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d"} err="failed to get container status \"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d\": rpc error: code = NotFound desc = could not find container \"6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d\": container with ID starting with 6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d not found: ID does not exist" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.931666 4720 scope.go:117] "RemoveContainer" containerID="184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4" Feb 02 09:17:17 crc kubenswrapper[4720]: E0202 09:17:17.932090 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4\": container with ID starting with 184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4 not found: ID does not exist" containerID="184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4" Feb 02 09:17:17 crc kubenswrapper[4720]: I0202 09:17:17.932111 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4"} err="failed to get container status \"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4\": rpc error: code = NotFound desc = could not find container \"184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4\": container with ID starting with 184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4 not found: ID does not exist" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.191222 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318469 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318609 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318833 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmlnx\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318872 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318916 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.318966 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319059 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319139 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data\") pod \"9f8bec59-9988-4424-9eab-98f0ea954808\" (UID: \"9f8bec59-9988-4424-9eab-98f0ea954808\") " Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319160 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319584 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.319600 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9f8bec59-9988-4424-9eab-98f0ea954808-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.329070 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts" (OuterVolumeSpecName: "scripts") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.335509 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx" (OuterVolumeSpecName: "kube-api-access-bmlnx") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "kube-api-access-bmlnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: E0202 09:17:18.335805 4720 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c59bf7c9c3c9350ed3ba0baa96100d94d4c170be905c4e8931f37fc789857c6b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c59bf7c9c3c9350ed3ba0baa96100d94d4c170be905c4e8931f37fc789857c6b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-internal-api-0_687b9563-476f-485d-bfd5-8f874470c4f2/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-internal-api-0_687b9563-476f-485d-bfd5-8f874470c4f2/glance-httpd/0.log: no such file or directory Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.342682 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph" (OuterVolumeSpecName: "ceph") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.348402 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.381346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.448091 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmlnx\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-kube-api-access-bmlnx\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.448119 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.448128 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.448137 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.448147 4720 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9f8bec59-9988-4424-9eab-98f0ea954808-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.471895 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data" (OuterVolumeSpecName: "config-data") pod "9f8bec59-9988-4424-9eab-98f0ea954808" (UID: "9f8bec59-9988-4424-9eab-98f0ea954808"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.503262 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vhqsn"] Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.549527 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8bec59-9988-4424-9eab-98f0ea954808-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.834141 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9f8bec59-9988-4424-9eab-98f0ea954808","Type":"ContainerDied","Data":"1af5a3ee989b96b5f3f7836f23b8e11841050f085481193a98e1aefc5a9ab090"} Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.834178 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.834453 4720 scope.go:117] "RemoveContainer" containerID="27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.835819 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" event={"ID":"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb","Type":"ContainerStarted","Data":"3aecedb74b38991061613d93ed6e210b282928bf3e76833a4d6a1f0c20b92d9e"} Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.839059 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0" exitCode=0 Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.839129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0"} Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.839153 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04"} Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.871253 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.878515 4720 scope.go:117] "RemoveContainer" containerID="7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.887559 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.905626 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b88e2b-4fc1-4047-b366-d5e8571a4c89" path="/var/lib/kubelet/pods/41b88e2b-4fc1-4047-b366-d5e8571a4c89/volumes" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.906649 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" path="/var/lib/kubelet/pods/9f8bec59-9988-4424-9eab-98f0ea954808/volumes" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.907305 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:18 crc kubenswrapper[4720]: E0202 09:17:18.907654 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="probe" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.907668 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="probe" Feb 02 09:17:18 crc kubenswrapper[4720]: E0202 09:17:18.907688 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="manila-share" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.907696 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="manila-share" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.907874 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="manila-share" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.907899 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8bec59-9988-4424-9eab-98f0ea954808" containerName="probe" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.911713 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.914753 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.918320 4720 scope.go:117] "RemoveContainer" containerID="33b2587eeea210938842b756c82dc97d447412bb2884bc249a32550e7a5523ff" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.921286 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957719 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957750 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-scripts\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957784 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr7l\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-kube-api-access-2rr7l\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957808 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957843 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-ceph\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.957898 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:18 crc kubenswrapper[4720]: I0202 09:17:18.958029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.059568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-ceph\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.059950 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.059978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060011 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060086 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060110 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-scripts\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rr7l\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-kube-api-access-2rr7l\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " 
pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.060945 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.061067 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07e6a921-0f7c-40b4-9136-549d1cdf45c1-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.064631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-scripts\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.064725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-ceph\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.065923 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.066403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.069643 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e6a921-0f7c-40b4-9136-549d1cdf45c1-config-data\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.083491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rr7l\" (UniqueName: \"kubernetes.io/projected/07e6a921-0f7c-40b4-9136-549d1cdf45c1-kube-api-access-2rr7l\") pod \"manila-share-share1-0\" (UID: \"07e6a921-0f7c-40b4-9136-549d1cdf45c1\") " pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.238429 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.777787 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.863673 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerStarted","Data":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.863831 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-central-agent" containerID="cri-o://a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" gracePeriod=30 Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.864097 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.864327 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="proxy-httpd" containerID="cri-o://dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" gracePeriod=30 Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.864375 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="sg-core" containerID="cri-o://f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" gracePeriod=30 Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.864408 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-notification-agent" containerID="cri-o://d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" gracePeriod=30 Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.872914 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"07e6a921-0f7c-40b4-9136-549d1cdf45c1","Type":"ContainerStarted","Data":"bb50ea7966e6ee11d23bef9f0192e7ae1ec4c137810de5e5af85a47ac45c71f7"} Feb 02 09:17:19 crc kubenswrapper[4720]: I0202 09:17:19.897589 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.640540951 podStartE2EDuration="7.897570648s" podCreationTimestamp="2026-02-02 09:17:12 +0000 UTC" firstStartedPulling="2026-02-02 09:17:14.058799248 +0000 UTC m=+1267.914424804" lastFinishedPulling="2026-02-02 09:17:19.315828935 +0000 UTC m=+1273.171454501" observedRunningTime="2026-02-02 09:17:19.88668879 +0000 UTC m=+1273.742314346" watchObservedRunningTime="2026-02-02 09:17:19.897570648 +0000 UTC m=+1273.753196204" Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.597020 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-83441b6406ccdbc1f9926715144256e507082a919707d7aab5513abb2775da28": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-83441b6406ccdbc1f9926715144256e507082a919707d7aab5513abb2775da28: no 
such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.597646 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-f0aec05d3a20da59ccb47f05d12a40aec67abe82ef1f1e2420564058f1836915": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-f0aec05d3a20da59ccb47f05d12a40aec67abe82ef1f1e2420564058f1836915: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.597667 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice/crio-conmon-42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice/crio-conmon-42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.598680 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice/crio-42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice/crio-42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.598715 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice/crio-conmon-24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice/crio-conmon-24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.598772 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice/crio-conmon-d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice/crio-conmon-d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.598795 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice/crio-24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice/crio-24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.599518 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-conmon-5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-conmon-5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.599580 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice/crio-d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice/crio-d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: E0202 09:17:20.605459 4720 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687b9563_476f_485d_bfd5_8f874470c4f2.slice/crio-3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371: Error finding container 3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371: Status 404 returned error can't find the container with id 3727913ec7154fbbc3338d4572901b8d0ee2687dae2ed81049b1aa6946117371 Feb 02 09:17:20 crc kubenswrapper[4720]: E0202 09:17:20.605769 4720 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fbf42a7_347e_4355_afc5_4e70bbf58271.slice/crio-d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e: Error finding container d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e: Status 404 returned error can't find the container with id d8ef914a333f88faee58a7c156e3ae897cfa5b6693bb6a2d4a43a2ee8e7e927e Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.607163 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-conmon-ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-conmon-ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.607201 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice/crio-5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.607223 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice/crio-conmon-40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice/crio-conmon-40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.607236 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice/crio-ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: W0202 09:17:20.607255 4720 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice/crio-40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice/crio-40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad.scope: no such file or directory Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.616722 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.796736 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.797838 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.798010 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.798225 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.798353 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.798444 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.798541 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrhfc\" (UniqueName: \"kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc\") pod \"2d52497c-e7d6-4a84-a54a-7d463a7038df\" (UID: \"2d52497c-e7d6-4a84-a54a-7d463a7038df\") " Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.799427 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.800125 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.803034 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts" (OuterVolumeSpecName: "scripts") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.813702 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc" (OuterVolumeSpecName: "kube-api-access-hrhfc") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "kube-api-access-hrhfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.842013 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.884719 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"07e6a921-0f7c-40b4-9136-549d1cdf45c1","Type":"ContainerStarted","Data":"5fc0217748a61a5affe9431c8662659401ef63be509cb8178c551980fca674f9"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.893855 4720 generic.go:334] "Generic (PLEG): container finished" podID="e7ea3e29-f479-4d19-9200-476ab329c100" containerID="7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269" exitCode=137 Feb 02 09:17:20 crc kubenswrapper[4720]: E0202 09:17:20.896407 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8bec59_9988_4424_9eab_98f0ea954808.slice/crio-7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice/crio-conmon-184b6116ce9fde1e0cc5ef985f037449fc1dcc32013118c93aa83a629c92b7d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1550113c_09da_4c3e_9ee1_cd4f28eaa995.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b62cf19_56cf_4b24_bf4b_417906e61501.slice/crio-d246bb8e7b3b3133a30c7273db390b7165d4a83ce46ed66e2f7345bfe53da290\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8bec59_9988_4424_9eab_98f0ea954808.slice/crio-27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice/crio-3ad57bb47beba59b1316a2ef5142a21ebf9af947ff5e4d3da2a3d43619e37cf7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20dd138_dcb5_4c76_905c_b9eb86dfd50b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice/crio-conmon-6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ea3e29_f479_4d19_9200_476ab329c100.slice/crio-conmon-7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0342796d_ac1a_4cfa_8666_1c772eab1ed2.slice/crio-conmon-06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc972d9_758e_4a27_9d67_ce14b4ece48b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb2b63a_ae1d_4400_877d_92cacdddfcbe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8bec59_9988_4424_9eab_98f0ea954808.slice/crio-1af5a3ee989b96b5f3f7836f23b8e11841050f085481193a98e1aefc5a9ab090\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ea3e29_f479_4d19_9200_476ab329c100.slice/crio-7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0342796d_ac1a_4cfa_8666_1c772eab1ed2.slice/crio-06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb2b63a_ae1d_4400_877d_92cacdddfcbe.slice/crio-46847029799ff372116831cbeffd6df8e0a028cfc31628b0546d6a33d871d2e6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b88e2b_4fc1_4047_b366_d5e8571a4c89.slice/crio-6800d93145b9d9fceef5b237937f37b3803666ff562e783556515058c03a2d5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bc972d9_758e_4a27_9d67_ce14b4ece48b.slice/crio-5c72e8f4c8d4762000ea969511156ec8177a09bc86912860d82ba12990b2a912\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87af8537_923b_4bee_8c85_aa7f3d179b6d.slice/crio-4703a8751a041adc0461ab9ca0d48f9fa6c48eca341014be6534c4c7f573f9e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8bec59_9988_4424_9eab_98f0ea954808.slice/crio-conmon-7bda73fdf37787a7b848a07097c4add3a3111119a9f21d45aa96f535c9447f55.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f8bec59_9988_4424_9eab_98f0ea954808.slice/crio-conmon-27de9ad9caac2d5143731675efddc80dc7fe6078c642409f6cc5c7b36810d452.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db10941_ba5e_445a_a995_bd1493d5270c.slice/crio-fca74e70efcc3b440b347108cf13a7824bb88d4440282246ec348620c818e140\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fd9095_5cd6_4a99_a5a0_fda750c1a6b7.slice/crio-4287574e4225381fe530a38ed3c97bde02735f928f3de6e4ca52ad00bd4135d6\": RecentStats: unable to find data in memory cache]" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901121 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901284 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901431 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901916 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901940 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrhfc\" (UniqueName: \"kubernetes.io/projected/2d52497c-e7d6-4a84-a54a-7d463a7038df-kube-api-access-hrhfc\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.901953 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d52497c-e7d6-4a84-a54a-7d463a7038df-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.902059 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" exitCode=0 Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.902083 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" exitCode=2 Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.902094 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" exitCode=0 Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.902101 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" exitCode=0 Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.902163 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904102 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerDied","Data":"7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerDied","Data":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904144 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerDied","Data":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904153 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerDied","Data":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904162 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerDied","Data":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904170 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d52497c-e7d6-4a84-a54a-7d463a7038df","Type":"ContainerDied","Data":"83312e3600bbcba858784ce8cdcd366cf50a1d62885d8c67c341956b8d8bda23"} Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.904188 4720 scope.go:117] "RemoveContainer" containerID="cfa0d360cae26b0c2a8dc8dbb5704822a8b671b94feadfbd85eda5647e826c27" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.925000 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:20 crc kubenswrapper[4720]: I0202 09:17:20.954752 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data" (OuterVolumeSpecName: "config-data") pod "2d52497c-e7d6-4a84-a54a-7d463a7038df" (UID: "2d52497c-e7d6-4a84-a54a-7d463a7038df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003454 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003507 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003572 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003633 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003653 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003699 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.003752 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d62b\" (UniqueName: \"kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b\") pod \"e7ea3e29-f479-4d19-9200-476ab329c100\" (UID: \"e7ea3e29-f479-4d19-9200-476ab329c100\") " Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.004233 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.004252 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d52497c-e7d6-4a84-a54a-7d463a7038df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.004993 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs" (OuterVolumeSpecName: "logs") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.011124 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.012473 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b" (OuterVolumeSpecName: "kube-api-access-9d62b") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "kube-api-access-9d62b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.036045 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts" (OuterVolumeSpecName: "scripts") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.036439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data" (OuterVolumeSpecName: "config-data") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.038086 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.073085 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e7ea3e29-f479-4d19-9200-476ab329c100" (UID: "e7ea3e29-f479-4d19-9200-476ab329c100"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107218 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107250 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107279 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107289 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ea3e29-f479-4d19-9200-476ab329c100-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107300 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d62b\" (UniqueName: \"kubernetes.io/projected/e7ea3e29-f479-4d19-9200-476ab329c100-kube-api-access-9d62b\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107309 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7ea3e29-f479-4d19-9200-476ab329c100-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.107317 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e7ea3e29-f479-4d19-9200-476ab329c100-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.120683 4720 scope.go:117] "RemoveContainer" containerID="7c37c07c1c0c1ecb76b5afffa9661fbc87c5b27354a0eb0945177e1649149269" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.166434 4720 scope.go:117] "RemoveContainer" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.195084 4720 scope.go:117] "RemoveContainer" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.227869 4720 scope.go:117] "RemoveContainer" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.233476 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.241095 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253427 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253793 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-central-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253810 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-central-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253822 
4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="sg-core" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253828 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="sg-core" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253846 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon-log" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253854 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon-log" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253874 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="proxy-httpd" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253882 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="proxy-httpd" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253895 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-notification-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253912 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-notification-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.253925 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.253931 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254090 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="sg-core" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254104 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="proxy-httpd" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254114 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254130 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" containerName="horizon-log" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254139 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-notification-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.254149 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" containerName="ceilometer-central-agent" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.264623 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.264712 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.305845 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.306032 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312517 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zclmq\" (UniqueName: \"kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312546 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312623 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312640 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312702 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.312754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.332495 4720 scope.go:117] "RemoveContainer" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.361526 4720 scope.go:117] "RemoveContainer" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.362003 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": container with ID starting with dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7 not found: ID does not exist" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362053 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} err="failed to get container status \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": rpc error: code = NotFound desc = could not find container \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": container with ID starting with dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362081 4720 scope.go:117] "RemoveContainer" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.362365 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": container with ID starting with f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8 not found: ID does not exist" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362388 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} err="failed to get container status \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": rpc error: code = NotFound desc = could not find container \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": container with ID starting with f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362400 4720 scope.go:117] "RemoveContainer" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: E0202 09:17:21.362692 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": container with ID starting with d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25 not found: ID does not exist" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362715 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} err="failed to get container status \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": rpc error: code = NotFound desc = could not find container \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": container with ID starting with d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.362729 4720 scope.go:117] "RemoveContainer" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc 
kubenswrapper[4720]: E0202 09:17:21.363047 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": container with ID starting with a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4 not found: ID does not exist" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363069 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} err="failed to get container status \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": rpc error: code = NotFound desc = could not find container \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": container with ID starting with a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363081 4720 scope.go:117] "RemoveContainer" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363283 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} err="failed to get container status \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": rpc error: code = NotFound desc = could not find container \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": container with ID starting with dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363305 4720 scope.go:117] "RemoveContainer" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363552 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} err="failed to get container status \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": rpc error: code = NotFound desc = could not find container \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": container with ID starting with f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363574 4720 scope.go:117] "RemoveContainer" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363800 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} err="failed to get container status \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": rpc error: code = NotFound desc = could not find container \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": container with ID starting with d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.363819 4720 scope.go:117] "RemoveContainer" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc 
kubenswrapper[4720]: I0202 09:17:21.364108 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} err="failed to get container status \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": rpc error: code = NotFound desc = could not find container \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": container with ID starting with a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364127 4720 scope.go:117] "RemoveContainer" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364367 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} err="failed to get container status \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": rpc error: code = NotFound desc = could not find container \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": container with ID starting with dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364388 4720 scope.go:117] "RemoveContainer" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364531 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} err="failed to get container status \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": rpc error: code = NotFound desc = could not find container \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": container with ID starting with f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364548 4720 scope.go:117] "RemoveContainer" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364799 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} err="failed to get container status \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": rpc error: code = NotFound desc = could not find container \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": container with ID starting with d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.364817 4720 scope.go:117] "RemoveContainer" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365110 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} err="failed to get container status \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": rpc error: code = NotFound desc = could not find container \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": container with ID 
starting with a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365138 4720 scope.go:117] "RemoveContainer" containerID="dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365319 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7"} err="failed to get container status \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": rpc error: code = NotFound desc = could not find container \"dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7\": container with ID starting with dda18e6957e64e95710f9cd65f011a3a6b47890982cf15c7f99de633ccd743e7 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365336 4720 scope.go:117] "RemoveContainer" containerID="f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365822 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8"} err="failed to get container status \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": rpc error: code = NotFound desc = could not find container \"f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8\": container with ID starting with f7e9ef67ad72f13dbdb83e1a4941e7ffb02a8cf8a87cf655c26f2791fae34ba8 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.365843 4720 scope.go:117] "RemoveContainer" containerID="d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.366087 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25"} err="failed to get container status \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": rpc error: code = NotFound desc = could not find container \"d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25\": container with ID starting with d6c63b0b0605c2167c8b913663c5694c20a22b0c9d6dea20234d2e5517ee7b25 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.366107 4720 scope.go:117] "RemoveContainer" containerID="a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.366383 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4"} err="failed to get container status \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": rpc error: code = NotFound desc = could not find container \"a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4\": container with ID starting with a2e105e66a007440b64a21669468323e86b08269bdc00181e6d5b2daa916a5f4 not found: ID does not exist" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.414973 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " 
pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415076 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zclmq\" (UniqueName: \"kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415215 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415232 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.415249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.416015 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.416202 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.420445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.422689 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc 
kubenswrapper[4720]: I0202 09:17:21.422958 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.423129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.439607 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zclmq\" (UniqueName: \"kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq\") pod \"ceilometer-0\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.622570 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.914107 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b896b6bb4-gxblv" event={"ID":"e7ea3e29-f479-4d19-9200-476ab329c100","Type":"ContainerDied","Data":"8a2a69580026bab7ef2ae60850efb652a7b5dc1ebfc0d1a560b743bc40b9adbc"} Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.914398 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b896b6bb4-gxblv" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.923713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"07e6a921-0f7c-40b4-9136-549d1cdf45c1","Type":"ContainerStarted","Data":"3a0765184c93609080235b3760a2fb9eed4b71a62df9a987a56d224a881f78db"} Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.941023 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.941006359 podStartE2EDuration="3.941006359s" podCreationTimestamp="2026-02-02 09:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:21.938692224 +0000 UTC m=+1275.794317780" watchObservedRunningTime="2026-02-02 09:17:21.941006359 +0000 UTC m=+1275.796631915" Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.965793 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"] Feb 02 09:17:21 crc kubenswrapper[4720]: I0202 09:17:21.973620 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b896b6bb4-gxblv"] Feb 02 09:17:22 crc kubenswrapper[4720]: I0202 09:17:22.073528 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:22 crc kubenswrapper[4720]: W0202 09:17:22.088002 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d9eab8_3e8c_4246_b6dd_cbfab47686bc.slice/crio-da074ee05278b64f192df7b3da6617a0167ed31e13e274c6a49fc0936f7c1bec WatchSource:0}: Error finding container da074ee05278b64f192df7b3da6617a0167ed31e13e274c6a49fc0936f7c1bec: Status 404 returned error can't find the container with id 
da074ee05278b64f192df7b3da6617a0167ed31e13e274c6a49fc0936f7c1bec Feb 02 09:17:22 crc kubenswrapper[4720]: I0202 09:17:22.920791 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d52497c-e7d6-4a84-a54a-7d463a7038df" path="/var/lib/kubelet/pods/2d52497c-e7d6-4a84-a54a-7d463a7038df/volumes" Feb 02 09:17:22 crc kubenswrapper[4720]: I0202 09:17:22.922507 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ea3e29-f479-4d19-9200-476ab329c100" path="/var/lib/kubelet/pods/e7ea3e29-f479-4d19-9200-476ab329c100/volumes" Feb 02 09:17:22 crc kubenswrapper[4720]: I0202 09:17:22.937474 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerStarted","Data":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} Feb 02 09:17:22 crc kubenswrapper[4720]: I0202 09:17:22.937755 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerStarted","Data":"da074ee05278b64f192df7b3da6617a0167ed31e13e274c6a49fc0936f7c1bec"} Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.212416 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.212764 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.247393 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.247435 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.257150 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.263974 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.322045 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.351757 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.951178 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.951227 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.951245 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:23 crc kubenswrapper[4720]: I0202 09:17:23.951259 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.137010 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 09:17:24 crc 
kubenswrapper[4720]: I0202 09:17:24.321728 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.324007 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c8cc9866d-t5g2d" Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.435720 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"] Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.436273 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6959d6dc4b-9m4m5" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-log" containerID="cri-o://74819d035a8da316b52927667030cc4186809210f7b0bb4d280153462e8c5e32" gracePeriod=30 Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.436407 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6959d6dc4b-9m4m5" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-api" containerID="cri-o://d1a02e0c0ed1b597d39e63e523466196cfcc20178d1d7c7d1da6563051c6aeb7" gracePeriod=30 Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.958463 4720 generic.go:334] "Generic (PLEG): container finished" podID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerID="74819d035a8da316b52927667030cc4186809210f7b0bb4d280153462e8c5e32" exitCode=143 Feb 02 09:17:24 crc kubenswrapper[4720]: I0202 09:17:24.958524 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerDied","Data":"74819d035a8da316b52927667030cc4186809210f7b0bb4d280153462e8c5e32"} Feb 02 09:17:25 crc kubenswrapper[4720]: I0202 09:17:25.965978 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 09:17:25 crc kubenswrapper[4720]: I0202 09:17:25.966002 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 09:17:26 crc kubenswrapper[4720]: I0202 09:17:26.039078 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 09:17:26 crc kubenswrapper[4720]: I0202 09:17:26.039335 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 09:17:26 crc kubenswrapper[4720]: I0202 09:17:26.049949 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 09:17:26 crc kubenswrapper[4720]: I0202 09:17:26.117692 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:26 crc kubenswrapper[4720]: I0202 09:17:26.123668 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 09:17:28 crc kubenswrapper[4720]: I0202 09:17:28.002645 4720 generic.go:334] "Generic (PLEG): container finished" podID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerID="d1a02e0c0ed1b597d39e63e523466196cfcc20178d1d7c7d1da6563051c6aeb7" exitCode=0 Feb 02 09:17:28 crc kubenswrapper[4720]: I0202 09:17:28.004202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerDied","Data":"d1a02e0c0ed1b597d39e63e523466196cfcc20178d1d7c7d1da6563051c6aeb7"} Feb 02 09:17:28 crc kubenswrapper[4720]: 
I0202 09:17:28.110723 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.139673 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.239141 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.287344 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.287481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwk5\" (UniqueName: \"kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.288344 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.288450 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.288494 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.288544 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.288602 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs\") pod \"b103c33c-ef07-41f8-b969-e665fc7bedf1\" (UID: \"b103c33c-ef07-41f8-b969-e665fc7bedf1\") " Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.289916 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs" (OuterVolumeSpecName: "logs") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.293202 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5" (OuterVolumeSpecName: "kube-api-access-rcwk5") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "kube-api-access-rcwk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.295033 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts" (OuterVolumeSpecName: "scripts") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.355127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data" (OuterVolumeSpecName: "config-data") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.358860 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.391440 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.391474 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103c33c-ef07-41f8-b969-e665fc7bedf1-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.391484 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.391494 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwk5\" (UniqueName: \"kubernetes.io/projected/b103c33c-ef07-41f8-b969-e665fc7bedf1-kube-api-access-rcwk5\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.391504 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.411949 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.418734 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b103c33c-ef07-41f8-b969-e665fc7bedf1" (UID: "b103c33c-ef07-41f8-b969-e665fc7bedf1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.493910 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:29 crc kubenswrapper[4720]: I0202 09:17:29.494965 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103c33c-ef07-41f8-b969-e665fc7bedf1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.031255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerStarted","Data":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.031737 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerStarted","Data":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.034314 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6959d6dc4b-9m4m5" event={"ID":"b103c33c-ef07-41f8-b969-e665fc7bedf1","Type":"ContainerDied","Data":"595a82bf1e95abb35f082946480016954302c2bc7874c41209c07766f145f2e8"} Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.034396 4720 scope.go:117] "RemoveContainer" containerID="d1a02e0c0ed1b597d39e63e523466196cfcc20178d1d7c7d1da6563051c6aeb7" Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.034570 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6959d6dc4b-9m4m5" Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.039279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" event={"ID":"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb","Type":"ContainerStarted","Data":"88b2755c6980964cff99cdd98f32f6d0011c8f8313a7b742e3e8d892d180a360"} Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.070907 4720 scope.go:117] "RemoveContainer" containerID="74819d035a8da316b52927667030cc4186809210f7b0bb4d280153462e8c5e32" Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.073491 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" podStartSLOduration=2.728626803 podStartE2EDuration="13.073468443s" podCreationTimestamp="2026-02-02 09:17:17 +0000 UTC" firstStartedPulling="2026-02-02 09:17:18.511956895 +0000 UTC m=+1272.367582451" lastFinishedPulling="2026-02-02 09:17:28.856798535 +0000 UTC m=+1282.712424091" observedRunningTime="2026-02-02 09:17:30.064148922 +0000 UTC m=+1283.919774488" watchObservedRunningTime="2026-02-02 09:17:30.073468443 +0000 UTC m=+1283.929093999" Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.099174 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"] Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.107550 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6959d6dc4b-9m4m5"] Feb 02 09:17:30 crc kubenswrapper[4720]: I0202 09:17:30.898220 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" path="/var/lib/kubelet/pods/b103c33c-ef07-41f8-b969-e665fc7bedf1/volumes" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089219 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerStarted","Data":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089938 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089746 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="sg-core" containerID="cri-o://52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" gracePeriod=30 Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089441 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-central-agent" containerID="cri-o://d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" gracePeriod=30 Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089811 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="proxy-httpd" containerID="cri-o://aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" gracePeriod=30 Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.089774 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-notification-agent" 
containerID="cri-o://50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" gracePeriod=30 Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.132776 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.250503797 podStartE2EDuration="12.13275864s" podCreationTimestamp="2026-02-02 09:17:21 +0000 UTC" firstStartedPulling="2026-02-02 09:17:22.092159183 +0000 UTC m=+1275.947784739" lastFinishedPulling="2026-02-02 09:17:31.974414006 +0000 UTC m=+1285.830039582" observedRunningTime="2026-02-02 09:17:33.124996497 +0000 UTC m=+1286.980622053" watchObservedRunningTime="2026-02-02 09:17:33.13275864 +0000 UTC m=+1286.988384196" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.806906 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.878806 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.878922 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zclmq\" (UniqueName: \"kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.878943 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.878987 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.879045 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.879087 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.879131 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle\") pod \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\" (UID: \"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc\") " Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.879262 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.879520 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.880018 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.884157 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq" (OuterVolumeSpecName: "kube-api-access-zclmq") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "kube-api-access-zclmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.884360 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts" (OuterVolumeSpecName: "scripts") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.905717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.950730 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.980692 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zclmq\" (UniqueName: \"kubernetes.io/projected/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-kube-api-access-zclmq\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.980718 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.980727 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.980735 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:33 crc kubenswrapper[4720]: I0202 09:17:33.980743 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.002495 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data" (OuterVolumeSpecName: "config-data") pod "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" (UID: "b2d9eab8-3e8c-4246-b6dd-cbfab47686bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.082959 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103638 4720 generic.go:334] "Generic (PLEG): container finished" podID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" exitCode=0 Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103673 4720 generic.go:334] "Generic (PLEG): container finished" podID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" exitCode=2 Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103681 4720 generic.go:334] "Generic (PLEG): container finished" podID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" exitCode=0 Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103689 4720 generic.go:334] "Generic (PLEG): container finished" podID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" exitCode=0 Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerDied","Data":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerDied","Data":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103784 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerDied","Data":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerDied","Data":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2d9eab8-3e8c-4246-b6dd-cbfab47686bc","Type":"ContainerDied","Data":"da074ee05278b64f192df7b3da6617a0167ed31e13e274c6a49fc0936f7c1bec"} Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103852 4720 scope.go:117] "RemoveContainer" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.103936 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.132380 4720 scope.go:117] "RemoveContainer" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.154314 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.168529 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.173985 4720 scope.go:117] "RemoveContainer" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.178579 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179076 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="sg-core" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179089 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="sg-core" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179101 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-notification-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179121 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-notification-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179137 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-log" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179143 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-log" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179159 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="proxy-httpd" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179165 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="proxy-httpd" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179183 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-api" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179188 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-api" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.179195 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-central-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179201 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-central-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179370 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-central-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179385 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="sg-core" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179398 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-api" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179407 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b103c33c-ef07-41f8-b969-e665fc7bedf1" containerName="placement-log" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179418 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="ceilometer-notification-agent" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.179436 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" containerName="proxy-httpd" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.181242 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.184634 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.184661 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.186703 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.226243 4720 scope.go:117] "RemoveContainer" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.276500 4720 scope.go:117] "RemoveContainer" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.276918 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": container with ID starting with aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba not found: ID does not exist" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.276948 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} err="failed to get container status \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": rpc error: code = NotFound desc = could not find container \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": container with ID starting with aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.276984 4720 scope.go:117] "RemoveContainer" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.277273 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": container with ID starting with 52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262 not found: ID does not exist" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277309 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} err="failed to get container status \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": rpc error: code = NotFound desc = could not find container \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": container with ID starting with 52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277328 4720 scope.go:117] "RemoveContainer" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.277561 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": container with ID starting with 50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151 not found: ID does not exist" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277597 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} err="failed to get container status \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": rpc error: code = NotFound desc = could not find container \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": container with ID starting with 50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277609 4720 scope.go:117] "RemoveContainer" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: E0202 09:17:34.277842 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": container with ID starting with d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69 not found: ID does not exist" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277859 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} err="failed to get container status \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": rpc error: code = NotFound desc = could not find container \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": container with ID starting with d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.277872 4720 scope.go:117] "RemoveContainer" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282050 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} err="failed to get container status \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": rpc error: code = NotFound desc = could not find container \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": container with ID starting with aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282095 4720 scope.go:117] "RemoveContainer" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282511 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} err="failed to get container status \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": rpc error: code = NotFound desc = could not find container \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": container with ID starting with 
52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282547 4720 scope.go:117] "RemoveContainer" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282945 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} err="failed to get container status \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": rpc error: code = NotFound desc = could not find container \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": container with ID starting with 50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.282993 4720 scope.go:117] "RemoveContainer" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.283344 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} err="failed to get container status \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": rpc error: code = NotFound desc = could not find container \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": container with ID starting with d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.283366 4720 scope.go:117] "RemoveContainer" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.283814 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} err="failed to get container status \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": rpc error: code = NotFound desc = could not find container \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": container with ID starting with aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.283835 4720 scope.go:117] "RemoveContainer" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.284127 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} err="failed to get container status \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": rpc error: code = NotFound desc = could not find container \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": container with ID starting with 52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.284147 4720 scope.go:117] "RemoveContainer" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.284454 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} err="failed to get container status \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": rpc error: code = NotFound desc = could not find container \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": container with ID starting with 50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.284490 4720 scope.go:117] "RemoveContainer" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.287208 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} err="failed to get container status \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": rpc error: code = NotFound desc = could not find container \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": container with ID starting with d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.287230 4720 scope.go:117] "RemoveContainer" containerID="aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.287946 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba"} err="failed to get container status \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": rpc error: code = NotFound desc = could not find container \"aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba\": container with ID starting with aaad78948408b8145ffaffee59cb590bde06d046669e677f6cb71bd8b01066ba not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.287963 4720 scope.go:117] "RemoveContainer" containerID="52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288365 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262"} err="failed to get container status \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": rpc error: code = NotFound desc = could not find container \"52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262\": container with ID starting with 52b975329353ccb292005b8753f056235d2a0deb5d744a360840ad638cd0b262 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288415 4720 scope.go:117] "RemoveContainer" containerID="50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fgz4\" (UniqueName: 
\"kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288501 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288679 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.288789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.289018 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151"} err="failed to get container status \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": rpc error: code = NotFound desc = could not find container \"50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151\": container with ID starting with 50e5584fec8774fa187bf2df177d59a6b973beeabec1fb982adaaaa2c7ccb151 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.289037 4720 scope.go:117] "RemoveContainer" containerID="d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.289311 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69"} err="failed to get container status \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": rpc error: code = NotFound desc = could not find container \"d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69\": container with ID starting with d11bbf083e3dd71ecb9b56f8a49b44a12a0015eb8c834ed9fbc1a68eca2d1d69 not found: ID does not exist" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.391348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.391401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.391452 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.391653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.392214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.392929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fgz4\" (UniqueName: \"kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.396846 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.396797 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.396790 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.411058 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.411109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.411924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.413841 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fgz4\" (UniqueName: \"kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.417947 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.504658 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.899201 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d9eab8-3e8c-4246-b6dd-cbfab47686bc" path="/var/lib/kubelet/pods/b2d9eab8-3e8c-4246-b6dd-cbfab47686bc/volumes" Feb 02 09:17:34 crc kubenswrapper[4720]: I0202 09:17:34.937407 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:34 crc kubenswrapper[4720]: W0202 09:17:34.941851 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eea4a1_b18e_43d4_bfd4_ff40289947ba.slice/crio-aee303ef5a0181c5fc313134d644e39487ad1ae158d45445fd12d71e4aa51e8e WatchSource:0}: Error finding container aee303ef5a0181c5fc313134d644e39487ad1ae158d45445fd12d71e4aa51e8e: Status 404 returned error can't find the container with id aee303ef5a0181c5fc313134d644e39487ad1ae158d45445fd12d71e4aa51e8e Feb 02 09:17:35 crc kubenswrapper[4720]: I0202 09:17:35.121495 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerStarted","Data":"aee303ef5a0181c5fc313134d644e39487ad1ae158d45445fd12d71e4aa51e8e"} Feb 02 09:17:35 crc kubenswrapper[4720]: I0202 09:17:35.690402 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 09:17:35 crc kubenswrapper[4720]: I0202 09:17:35.944837 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:36 crc kubenswrapper[4720]: I0202 09:17:36.162503 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerStarted","Data":"d2fb3f518e48d10d45f50ac2190207c3fb8e6367dad83ab2e57b596264e33139"} Feb 02 09:17:37 crc kubenswrapper[4720]: I0202 09:17:37.173427 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerStarted","Data":"a4a63292486e0be0be2723968d01a0901d478ff761d0a830e3da91238830f2aa"} Feb 02 09:17:37 crc kubenswrapper[4720]: I0202 09:17:37.173781 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerStarted","Data":"cc8719c3cfd929a53016e2de3d38a982aaa8ff821343fef2df615fa5c0b415cf"} Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.208397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerStarted","Data":"9e2890c74b7b9bdef85fdf3db047d21989999208ef88db68cfd95870cfb4d79b"} Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.208747 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-notification-agent" containerID="cri-o://cc8719c3cfd929a53016e2de3d38a982aaa8ff821343fef2df615fa5c0b415cf" gracePeriod=30 Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.208733 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-central-agent" containerID="cri-o://d2fb3f518e48d10d45f50ac2190207c3fb8e6367dad83ab2e57b596264e33139" gracePeriod=30 Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.208778 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="proxy-httpd" containerID="cri-o://9e2890c74b7b9bdef85fdf3db047d21989999208ef88db68cfd95870cfb4d79b" gracePeriod=30 Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.209067 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.208766 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="sg-core" containerID="cri-o://a4a63292486e0be0be2723968d01a0901d478ff761d0a830e3da91238830f2aa" gracePeriod=30 Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.239764 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.106511029 podStartE2EDuration="6.23973854s" podCreationTimestamp="2026-02-02 09:17:34 +0000 UTC" firstStartedPulling="2026-02-02 09:17:34.944391235 +0000 UTC m=+1288.800016791" lastFinishedPulling="2026-02-02 09:17:39.077618746 +0000 UTC m=+1292.933244302" observedRunningTime="2026-02-02 09:17:40.230250755 +0000 UTC m=+1294.085876311" watchObservedRunningTime="2026-02-02 09:17:40.23973854 +0000 UTC m=+1294.095364096" Feb 02 09:17:40 crc kubenswrapper[4720]: I0202 09:17:40.824506 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.217236 4720 generic.go:334] "Generic (PLEG): container finished" podID="f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" containerID="88b2755c6980964cff99cdd98f32f6d0011c8f8313a7b742e3e8d892d180a360" exitCode=0 Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.217303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" 
event={"ID":"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb","Type":"ContainerDied","Data":"88b2755c6980964cff99cdd98f32f6d0011c8f8313a7b742e3e8d892d180a360"} Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220095 4720 generic.go:334] "Generic (PLEG): container finished" podID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerID="9e2890c74b7b9bdef85fdf3db047d21989999208ef88db68cfd95870cfb4d79b" exitCode=0 Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220129 4720 generic.go:334] "Generic (PLEG): container finished" podID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerID="a4a63292486e0be0be2723968d01a0901d478ff761d0a830e3da91238830f2aa" exitCode=2 Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220141 4720 generic.go:334] "Generic (PLEG): container finished" podID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerID="cc8719c3cfd929a53016e2de3d38a982aaa8ff821343fef2df615fa5c0b415cf" exitCode=0 Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220156 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerDied","Data":"9e2890c74b7b9bdef85fdf3db047d21989999208ef88db68cfd95870cfb4d79b"} Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220184 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerDied","Data":"a4a63292486e0be0be2723968d01a0901d478ff761d0a830e3da91238830f2aa"} Feb 02 09:17:41 crc kubenswrapper[4720]: I0202 09:17:41.220197 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerDied","Data":"cc8719c3cfd929a53016e2de3d38a982aaa8ff821343fef2df615fa5c0b415cf"} Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.618182 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.685573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2gmk\" (UniqueName: \"kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk\") pod \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.686450 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts\") pod \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.686520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data\") pod \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.686545 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle\") pod \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\" (UID: \"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb\") " Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.719116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk" (OuterVolumeSpecName: "kube-api-access-w2gmk") pod "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" (UID: "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb"). InnerVolumeSpecName "kube-api-access-w2gmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.719128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts" (OuterVolumeSpecName: "scripts") pod "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" (UID: "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.733040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data" (OuterVolumeSpecName: "config-data") pod "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" (UID: "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.755557 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" (UID: "f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.788494 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2gmk\" (UniqueName: \"kubernetes.io/projected/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-kube-api-access-w2gmk\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.788549 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.788559 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:42 crc kubenswrapper[4720]: I0202 09:17:42.788568 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.242046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" event={"ID":"f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb","Type":"ContainerDied","Data":"3aecedb74b38991061613d93ed6e210b282928bf3e76833a4d6a1f0c20b92d9e"} Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.242407 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aecedb74b38991061613d93ed6e210b282928bf3e76833a4d6a1f0c20b92d9e" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.242482 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vhqsn" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.360067 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:43 crc kubenswrapper[4720]: E0202 09:17:43.360489 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" containerName="nova-cell0-conductor-db-sync" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.360504 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" containerName="nova-cell0-conductor-db-sync" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.360697 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" containerName="nova-cell0-conductor-db-sync" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.361296 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.363576 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.363919 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pkl7g" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.378814 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.504176 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.504549 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.504678 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvz72\" (UniqueName: \"kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.606534 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.606606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvz72\" (UniqueName: \"kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.606688 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.612750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.613274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.633853 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvz72\" (UniqueName: \"kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72\") pod \"nova-cell0-conductor-0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:43 crc kubenswrapper[4720]: I0202 09:17:43.674729 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:44 crc kubenswrapper[4720]: I0202 09:17:44.158429 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:44 crc kubenswrapper[4720]: I0202 09:17:44.256739 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3286fd15-7691-4745-a5f3-24c50205f1e0","Type":"ContainerStarted","Data":"2d6177100fa7421ab83566fbcfd1be64f0b1cd28f88cf53375c64fad019be395"} Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.277926 4720 generic.go:334] "Generic (PLEG): container finished" podID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerID="d2fb3f518e48d10d45f50ac2190207c3fb8e6367dad83ab2e57b596264e33139" exitCode=0 Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.278004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerDied","Data":"d2fb3f518e48d10d45f50ac2190207c3fb8e6367dad83ab2e57b596264e33139"} Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.280147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3286fd15-7691-4745-a5f3-24c50205f1e0","Type":"ContainerStarted","Data":"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b"} Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.281728 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.518829 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.543973 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.543953245 podStartE2EDuration="2.543953245s" podCreationTimestamp="2026-02-02 09:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:45.310414999 +0000 UTC m=+1299.166040575" watchObservedRunningTime="2026-02-02 09:17:45.543953245 +0000 UTC m=+1299.399578801" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.668393 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.668908 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.668912 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669034 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669138 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669209 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fgz4\" (UniqueName: \"kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669267 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669296 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.669340 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml\") pod \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\" (UID: \"43eea4a1-b18e-43d4-bfd4-ff40289947ba\") " Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.670011 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.670043 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43eea4a1-b18e-43d4-bfd4-ff40289947ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.675219 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts" (OuterVolumeSpecName: "scripts") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.684656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4" (OuterVolumeSpecName: "kube-api-access-5fgz4") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "kube-api-access-5fgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.728542 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.773127 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.773165 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.773178 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fgz4\" (UniqueName: \"kubernetes.io/projected/43eea4a1-b18e-43d4-bfd4-ff40289947ba-kube-api-access-5fgz4\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.777822 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.818969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data" (OuterVolumeSpecName: "config-data") pod "43eea4a1-b18e-43d4-bfd4-ff40289947ba" (UID: "43eea4a1-b18e-43d4-bfd4-ff40289947ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.875425 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:45 crc kubenswrapper[4720]: I0202 09:17:45.875464 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eea4a1-b18e-43d4-bfd4-ff40289947ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.295382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43eea4a1-b18e-43d4-bfd4-ff40289947ba","Type":"ContainerDied","Data":"aee303ef5a0181c5fc313134d644e39487ad1ae158d45445fd12d71e4aa51e8e"} Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.295473 4720 scope.go:117] "RemoveContainer" containerID="9e2890c74b7b9bdef85fdf3db047d21989999208ef88db68cfd95870cfb4d79b" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.297098 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.323685 4720 scope.go:117] "RemoveContainer" containerID="a4a63292486e0be0be2723968d01a0901d478ff761d0a830e3da91238830f2aa" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.361220 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.371442 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.419340 4720 scope.go:117] "RemoveContainer" containerID="cc8719c3cfd929a53016e2de3d38a982aaa8ff821343fef2df615fa5c0b415cf" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421079 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:46 crc kubenswrapper[4720]: E0202 09:17:46.421549 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-notification-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421567 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-notification-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: E0202 09:17:46.421582 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="proxy-httpd" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421590 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="proxy-httpd" Feb 02 09:17:46 crc kubenswrapper[4720]: E0202 09:17:46.421624 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="sg-core" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421632 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="sg-core" Feb 02 09:17:46 crc kubenswrapper[4720]: E0202 09:17:46.421649 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-central-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421657 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-central-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.421878 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="sg-core" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.422060 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-central-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.422072 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="proxy-httpd" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.422085 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" containerName="ceilometer-notification-agent" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.424262 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.428558 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.428776 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.434481 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.481712 4720 scope.go:117] "RemoveContainer" containerID="d2fb3f518e48d10d45f50ac2190207c3fb8e6367dad83ab2e57b596264e33139" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.592830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.592910 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.593129 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.593328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.593511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.593608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llt88\" (UniqueName: \"kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.593669 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.695890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696296 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696312 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696370 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llt88\" (UniqueName: \"kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.696918 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.703669 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.706963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.710604 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.717346 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.718411 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llt88\" (UniqueName: \"kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88\") pod \"ceilometer-0\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.759344 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:46 crc kubenswrapper[4720]: I0202 09:17:46.912586 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43eea4a1-b18e-43d4-bfd4-ff40289947ba" path="/var/lib/kubelet/pods/43eea4a1-b18e-43d4-bfd4-ff40289947ba/volumes" Feb 02 09:17:47 crc kubenswrapper[4720]: I0202 09:17:47.260512 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:47 crc kubenswrapper[4720]: I0202 09:17:47.307054 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerStarted","Data":"4dfe3fcce6e248680ff0015fa8e34e3452950c8e50ac224d240f3f5d5be7bcd6"} Feb 02 09:17:48 crc kubenswrapper[4720]: I0202 09:17:48.317103 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerStarted","Data":"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7"} Feb 02 09:17:49 crc kubenswrapper[4720]: I0202 09:17:49.330339 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerStarted","Data":"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647"} Feb 02 09:17:50 crc kubenswrapper[4720]: I0202 09:17:50.343421 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerStarted","Data":"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9"} Feb 02 09:17:52 crc kubenswrapper[4720]: I0202 09:17:52.365472 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerStarted","Data":"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f"} Feb 02 09:17:52 crc kubenswrapper[4720]: I0202 09:17:52.365996 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:17:52 crc kubenswrapper[4720]: I0202 09:17:52.389082 4720 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.952887402 podStartE2EDuration="6.389065466s" podCreationTimestamp="2026-02-02 09:17:46 +0000 UTC" firstStartedPulling="2026-02-02 09:17:47.246370121 +0000 UTC m=+1301.101995677" lastFinishedPulling="2026-02-02 09:17:51.682548195 +0000 UTC m=+1305.538173741" observedRunningTime="2026-02-02 09:17:52.38796394 +0000 UTC m=+1306.243589496" watchObservedRunningTime="2026-02-02 09:17:52.389065466 +0000 UTC m=+1306.244691022" Feb 02 09:17:52 crc kubenswrapper[4720]: I0202 09:17:52.488542 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:52 crc kubenswrapper[4720]: I0202 09:17:52.488830 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" gracePeriod=30 Feb 02 09:17:52 crc kubenswrapper[4720]: E0202 09:17:52.493718 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:52 crc kubenswrapper[4720]: E0202 09:17:52.495392 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:52 crc kubenswrapper[4720]: E0202 09:17:52.496721 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:52 crc kubenswrapper[4720]: E0202 09:17:52.496764 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" Feb 02 09:17:53 crc kubenswrapper[4720]: E0202 09:17:53.678243 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:53 crc kubenswrapper[4720]: E0202 09:17:53.680553 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:53 crc kubenswrapper[4720]: E0202 09:17:53.681772 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 09:17:53 crc kubenswrapper[4720]: E0202 09:17:53.681802 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" Feb 02 09:17:54 crc kubenswrapper[4720]: I0202 09:17:54.178532 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:54 crc kubenswrapper[4720]: I0202 09:17:54.382802 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-central-agent" containerID="cri-o://d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7" gracePeriod=30 Feb 02 09:17:54 crc kubenswrapper[4720]: I0202 09:17:54.382845 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-notification-agent" containerID="cri-o://d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647" gracePeriod=30 Feb 02 09:17:54 crc kubenswrapper[4720]: I0202 09:17:54.382845 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="sg-core" containerID="cri-o://39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9" gracePeriod=30 Feb 02 09:17:54 crc kubenswrapper[4720]: I0202 09:17:54.382917 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="proxy-httpd" containerID="cri-o://e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f" gracePeriod=30 Feb 02 09:17:55 crc kubenswrapper[4720]: I0202 09:17:55.396784 4720 generic.go:334] "Generic (PLEG): container finished" podID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerID="e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f" exitCode=0 Feb 02 09:17:55 crc kubenswrapper[4720]: I0202 09:17:55.397237 4720 generic.go:334] "Generic (PLEG): container finished" podID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerID="39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9" exitCode=2 Feb 02 09:17:55 crc kubenswrapper[4720]: I0202 09:17:55.397255 4720 generic.go:334] "Generic (PLEG): container finished" podID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerID="d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647" exitCode=0 Feb 02 09:17:55 crc kubenswrapper[4720]: I0202 09:17:55.396850 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerDied","Data":"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f"} Feb 02 09:17:55 crc kubenswrapper[4720]: I0202 09:17:55.397299 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerDied","Data":"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9"} Feb 02 09:17:55 crc 
kubenswrapper[4720]: I0202 09:17:55.397322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerDied","Data":"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647"} Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.041319 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.169315 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.245285 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle\") pod \"3286fd15-7691-4745-a5f3-24c50205f1e0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.245361 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvz72\" (UniqueName: \"kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72\") pod \"3286fd15-7691-4745-a5f3-24c50205f1e0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.245471 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data\") pod \"3286fd15-7691-4745-a5f3-24c50205f1e0\" (UID: \"3286fd15-7691-4745-a5f3-24c50205f1e0\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.251440 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72" (OuterVolumeSpecName: "kube-api-access-vvz72") pod "3286fd15-7691-4745-a5f3-24c50205f1e0" (UID: "3286fd15-7691-4745-a5f3-24c50205f1e0"). InnerVolumeSpecName "kube-api-access-vvz72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.274485 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3286fd15-7691-4745-a5f3-24c50205f1e0" (UID: "3286fd15-7691-4745-a5f3-24c50205f1e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.281524 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data" (OuterVolumeSpecName: "config-data") pod "3286fd15-7691-4745-a5f3-24c50205f1e0" (UID: "3286fd15-7691-4745-a5f3-24c50205f1e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.348663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.348764 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.348822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llt88\" (UniqueName: \"kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.348851 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.348897 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.349051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.349140 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle\") pod \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\" (UID: \"06f050d8-d2f8-4a47-afeb-d609e5fb52fa\") " Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.349631 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.349643 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvz72\" (UniqueName: \"kubernetes.io/projected/3286fd15-7691-4745-a5f3-24c50205f1e0-kube-api-access-vvz72\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.349655 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3286fd15-7691-4745-a5f3-24c50205f1e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.351382 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd" 
(OuterVolumeSpecName: "run-httpd") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.352739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88" (OuterVolumeSpecName: "kube-api-access-llt88") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "kube-api-access-llt88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.353121 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.354310 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts" (OuterVolumeSpecName: "scripts") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.386135 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.416847 4720 generic.go:334] "Generic (PLEG): container finished" podID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerID="d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7" exitCode=0 Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.416967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerDied","Data":"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7"} Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.417009 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06f050d8-d2f8-4a47-afeb-d609e5fb52fa","Type":"ContainerDied","Data":"4dfe3fcce6e248680ff0015fa8e34e3452950c8e50ac224d240f3f5d5be7bcd6"} Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.417037 4720 scope.go:117] "RemoveContainer" containerID="e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.417217 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.420191 4720 generic.go:334] "Generic (PLEG): container finished" podID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" exitCode=0 Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.420236 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3286fd15-7691-4745-a5f3-24c50205f1e0","Type":"ContainerDied","Data":"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b"} Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.420272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3286fd15-7691-4745-a5f3-24c50205f1e0","Type":"ContainerDied","Data":"2d6177100fa7421ab83566fbcfd1be64f0b1cd28f88cf53375c64fad019be395"} Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.420324 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.438690 4720 scope.go:117] "RemoveContainer" containerID="39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.439229 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452743 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452784 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452799 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452810 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llt88\" (UniqueName: \"kubernetes.io/projected/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-kube-api-access-llt88\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452824 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.452835 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.464012 4720 scope.go:117] "RemoveContainer" containerID="d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 
09:17:56.466638 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.474082 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data" (OuterVolumeSpecName: "config-data") pod "06f050d8-d2f8-4a47-afeb-d609e5fb52fa" (UID: "06f050d8-d2f8-4a47-afeb-d609e5fb52fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.502211 4720 scope.go:117] "RemoveContainer" containerID="d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.502529 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.532298 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.533100 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-central-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533123 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-central-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.533164 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="sg-core" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533173 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="sg-core" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.533221 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-notification-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533231 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-notification-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.533255 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="proxy-httpd" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533263 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="proxy-httpd" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.533281 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533289 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533694 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="proxy-httpd" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533706 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-central-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533728 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" containerName="nova-cell0-conductor-conductor" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533764 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="ceilometer-notification-agent" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.533777 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" containerName="sg-core" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.535065 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.538818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.539033 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pkl7g" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.539223 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.543573 4720 scope.go:117] "RemoveContainer" containerID="e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.546400 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f\": container with ID starting with e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f not found: ID does not exist" containerID="e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.550137 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f"} err="failed to get container status \"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f\": rpc error: code = NotFound desc = could not find container \"e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f\": container with ID starting with e664766485a401d3805ca74c1d1106906c0433ed7ade9112d87a138428059d1f not found: ID does not exist" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.550174 4720 scope.go:117] "RemoveContainer" containerID="39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.550737 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9\": container with ID starting with 39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9 not found: ID does not exist" containerID="39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.550782 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9"} err="failed to get container status \"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9\": rpc error: code = NotFound desc = could not find container 
\"39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9\": container with ID starting with 39a7fe716752679b575334a7f802e476c08580002ec9161a8969c4c9f5b814c9 not found: ID does not exist" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.550807 4720 scope.go:117] "RemoveContainer" containerID="d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.551204 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647\": container with ID starting with d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647 not found: ID does not exist" containerID="d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.551243 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647"} err="failed to get container status \"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647\": rpc error: code = NotFound desc = could not find container \"d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647\": container with ID starting with d6b850e53fc49569d6cf961d4ad87fd562674f38d24713fe2187cfe5fccd5647 not found: ID does not exist" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.551267 4720 scope.go:117] "RemoveContainer" containerID="d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.552395 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7\": container with ID starting with d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7 not found: ID does not exist" containerID="d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.552524 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7"} err="failed to get container status \"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7\": rpc error: code = NotFound desc = could not find container \"d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7\": container with ID starting with d2f97e100e659f7655bd7fe418947ff68ec0e1f3a0d340545a29cfe64c2886f7 not found: ID does not exist" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.552624 4720 scope.go:117] "RemoveContainer" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.554335 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57q6\" (UniqueName: \"kubernetes.io/projected/cd79436b-a659-4d45-89cc-95f627093f00-kube-api-access-d57q6\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.554386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-config-data\") pod 
\"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.554544 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.555096 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f050d8-d2f8-4a47-afeb-d609e5fb52fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.583788 4720 scope.go:117] "RemoveContainer" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" Feb 02 09:17:56 crc kubenswrapper[4720]: E0202 09:17:56.584136 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b\": container with ID starting with db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b not found: ID does not exist" containerID="db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.584168 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b"} err="failed to get container status \"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b\": rpc error: code = NotFound desc = could not find container \"db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b\": container with ID starting with db197cf47058ad3a1bee79c939dc81bfea596ca33c63904ea64e0ba940391a0b not found: ID does not exist" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.657433 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.657576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57q6\" (UniqueName: \"kubernetes.io/projected/cd79436b-a659-4d45-89cc-95f627093f00-kube-api-access-d57q6\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.657621 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.660509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc 
kubenswrapper[4720]: I0202 09:17:56.661147 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd79436b-a659-4d45-89cc-95f627093f00-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.678985 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57q6\" (UniqueName: \"kubernetes.io/projected/cd79436b-a659-4d45-89cc-95f627093f00-kube-api-access-d57q6\") pod \"nova-cell0-conductor-0\" (UID: \"cd79436b-a659-4d45-89cc-95f627093f00\") " pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.809624 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.826472 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.848985 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.851229 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.854989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.855313 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.862440 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.864376 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.900622 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f050d8-d2f8-4a47-afeb-d609e5fb52fa" path="/var/lib/kubelet/pods/06f050d8-d2f8-4a47-afeb-d609e5fb52fa/volumes" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.901387 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3286fd15-7691-4745-a5f3-24c50205f1e0" path="/var/lib/kubelet/pods/3286fd15-7691-4745-a5f3-24c50205f1e0/volumes" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.967487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.967743 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kf9j\" (UniqueName: \"kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.967847 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts\") pod \"ceilometer-0\" (UID: 
\"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.967953 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.968009 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.968039 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:56 crc kubenswrapper[4720]: I0202 09:17:56.968065 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kf9j\" (UniqueName: \"kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070370 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070466 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.070505 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.071061 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.073075 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.077627 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.079453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.079561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.082029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.095278 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kf9j\" (UniqueName: \"kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j\") pod \"ceilometer-0\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") " pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.143536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.166283 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.433451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd79436b-a659-4d45-89cc-95f627093f00","Type":"ContainerStarted","Data":"a68e62191410ce61c9209be2edb616a226af3f6ca5ee8bcf34fab560cc4c5176"} Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.433504 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.433536 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd79436b-a659-4d45-89cc-95f627093f00","Type":"ContainerStarted","Data":"ccafd63c7e4ac8548d98a551c23915f96ecfae901c81e63ebd7e9f88fbc9a408"} Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.454848 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.454831867 podStartE2EDuration="1.454831867s" podCreationTimestamp="2026-02-02 09:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:17:57.447421432 +0000 UTC m=+1311.303046988" watchObservedRunningTime="2026-02-02 09:17:57.454831867 +0000 UTC m=+1311.310457423" Feb 02 09:17:57 crc kubenswrapper[4720]: I0202 09:17:57.693587 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:17:58 crc kubenswrapper[4720]: I0202 09:17:58.454172 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerStarted","Data":"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"} Feb 02 09:17:58 crc kubenswrapper[4720]: I0202 09:17:58.454565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerStarted","Data":"f672603b7c6c5ca06a0c61f4dfd761e607df0375303f8641b1f6c3cd23045e87"} Feb 02 09:17:59 crc kubenswrapper[4720]: I0202 09:17:59.465326 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerStarted","Data":"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"} Feb 02 09:18:00 crc kubenswrapper[4720]: I0202 09:18:00.476292 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerStarted","Data":"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"} Feb 02 09:18:02 crc kubenswrapper[4720]: I0202 09:18:02.503701 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerStarted","Data":"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"} Feb 02 09:18:02 crc kubenswrapper[4720]: I0202 09:18:02.504072 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:18:02 crc kubenswrapper[4720]: I0202 09:18:02.535670 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.93743859 podStartE2EDuration="6.535643426s" podCreationTimestamp="2026-02-02 09:17:56 +0000 UTC" firstStartedPulling="2026-02-02 09:17:57.701220139 +0000 
UTC m=+1311.556845695" lastFinishedPulling="2026-02-02 09:18:01.299424935 +0000 UTC m=+1315.155050531" observedRunningTime="2026-02-02 09:18:02.527183715 +0000 UTC m=+1316.382809261" watchObservedRunningTime="2026-02-02 09:18:02.535643426 +0000 UTC m=+1316.391269002" Feb 02 09:18:06 crc kubenswrapper[4720]: I0202 09:18:06.914722 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.530956 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kjlpf"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.532914 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.535823 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.536180 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.547021 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kjlpf"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.670738 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.670789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.670808 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.670834 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhbv\" (UniqueName: \"kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.712670 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.714537 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.718162 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.730804 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.758838 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.762102 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.773784 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhbv\" (UniqueName: \"kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774766 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774783 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsp8\" (UniqueName: \"kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.774818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " 
pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.777294 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.783529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.784019 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.835264 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.842600 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhbv\" (UniqueName: \"kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv\") pod \"nova-cell0-cell-mapping-kjlpf\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.859694 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.871717 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.873715 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.878301 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.881269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.881493 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.881630 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.881706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4xj\" (UniqueName: \"kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.881933 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.882051 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfsp8\" (UniqueName: \"kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.882219 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.894784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.895456 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc 
kubenswrapper[4720]: I0202 09:18:07.914079 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.952337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfsp8\" (UniqueName: \"kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8\") pod \"nova-scheduler-0\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z92v5\" (UniqueName: \"kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984619 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984703 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984727 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984781 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984809 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4xj\" (UniqueName: \"kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.984830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.986507 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.987728 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.989789 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.990713 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:07 crc kubenswrapper[4720]: I0202 09:18:07.997631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.009443 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.026178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.033939 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.044775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4xj\" (UniqueName: \"kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj\") pod \"nova-api-0\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " pod="openstack/nova-api-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.060046 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.090959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097315 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097436 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097491 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vh8s\" (UniqueName: \"kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097607 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z92v5\" (UniqueName: \"kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097702 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.097200 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.102391 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.104493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.105411 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.106937 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.108219 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.127796 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z92v5\" (UniqueName: \"kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5\") pod \"nova-metadata-0\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199197 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199239 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vh8s\" (UniqueName: \"kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199365 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199411 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.199464 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5vz\" (UniqueName: \"kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.205310 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.207124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.216943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vh8s\" (UniqueName: \"kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: 
\"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301609 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301639 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301676 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.301720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr5vz\" (UniqueName: \"kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.302472 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.302827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.302990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.302990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.306694 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc 
kubenswrapper[4720]: I0202 09:18:08.329373 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr5vz\" (UniqueName: \"kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz\") pod \"dnsmasq-dns-7d5fbbb8c5-czfj4\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.397220 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.417401 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.444813 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.558193 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kjlpf"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.661219 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9hz57"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.662543 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.664986 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.665138 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 09:18:08 crc kubenswrapper[4720]: W0202 09:18:08.687588 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ddaa72_38e7_4bb8_9f22_fe7e747e12dc.slice/crio-fefe8a783bfa1d1aa73385cec00043593bcf2c2fefb25df4df77513363b0e532 WatchSource:0}: Error finding container fefe8a783bfa1d1aa73385cec00043593bcf2c2fefb25df4df77513363b0e532: Status 404 returned error can't find the container with id fefe8a783bfa1d1aa73385cec00043593bcf2c2fefb25df4df77513363b0e532 Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.691908 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9hz57"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.702265 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:08 crc kubenswrapper[4720]: W0202 09:18:08.705212 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e3052fe_f13b_48f5_b285_9cae81db85a9.slice/crio-92507e60c4f37b4ba4c6f857763b372418604fcb12b406df711344b8197c050c WatchSource:0}: Error finding container 92507e60c4f37b4ba4c6f857763b372418604fcb12b406df711344b8197c050c: Status 404 returned error can't find the container with id 92507e60c4f37b4ba4c6f857763b372418604fcb12b406df711344b8197c050c Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.721319 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.824652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7gn\" (UniqueName: 
\"kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.825161 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.825198 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.825252 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: W0202 09:18:08.917396 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82d9de3f_df7e_4704_9faf_01d3120135fd.slice/crio-e440ba9f8087439b2e7b49a3c2b1e83620b04ae4e376d3cb302d860dfc8e9afe WatchSource:0}: Error finding container e440ba9f8087439b2e7b49a3c2b1e83620b04ae4e376d3cb302d860dfc8e9afe: Status 404 returned error can't find the container with id e440ba9f8087439b2e7b49a3c2b1e83620b04ae4e376d3cb302d860dfc8e9afe Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.917742 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:08 crc kubenswrapper[4720]: W0202 09:18:08.919131 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode252883e_2ccb_439e_9b0d_cafa816f8ed6.slice/crio-19844fa6cb90b210261685cfa1be27dd550fd6f3f2988062fefbc223c93b9243 WatchSource:0}: Error finding container 19844fa6cb90b210261685cfa1be27dd550fd6f3f2988062fefbc223c93b9243: Status 404 returned error can't find the container with id 19844fa6cb90b210261685cfa1be27dd550fd6f3f2988062fefbc223c93b9243 Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.926454 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.926487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.926523 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.926568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7gn\" (UniqueName: \"kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.933566 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.933595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.940922 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.943832 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7gn\" (UniqueName: \"kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn\") pod \"nova-cell1-conductor-db-sync-9hz57\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.948762 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 09:18:08 crc kubenswrapper[4720]: I0202 09:18:08.993318 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.100012 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.651906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82d9de3f-df7e-4704-9faf-01d3120135fd","Type":"ContainerStarted","Data":"e440ba9f8087439b2e7b49a3c2b1e83620b04ae4e376d3cb302d860dfc8e9afe"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.655300 4720 generic.go:334] "Generic (PLEG): container finished" podID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerID="479380aab0721bcb0e8b25bbf9080e4a6510f9ba6dd193bf5e2b8be5e4814b5b" exitCode=0 Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.655365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" event={"ID":"f8861dbd-3f3f-4935-9a55-1cb24c812053","Type":"ContainerDied","Data":"479380aab0721bcb0e8b25bbf9080e4a6510f9ba6dd193bf5e2b8be5e4814b5b"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.655389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" event={"ID":"f8861dbd-3f3f-4935-9a55-1cb24c812053","Type":"ContainerStarted","Data":"be5cbdeab5c6b20e68eb9a1e8298cab9967096328eae2a673ddb6094c386d666"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.655824 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9hz57"] Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.667447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kjlpf" event={"ID":"699b60ee-c039-48cf-8aa4-da649552c691","Type":"ContainerStarted","Data":"72706881b5cf595de4332106d140a68ee4816eae6f02406185e12f4bc571999e"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.667499 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kjlpf" event={"ID":"699b60ee-c039-48cf-8aa4-da649552c691","Type":"ContainerStarted","Data":"68f2c0cbf96cb12856c4e31f4e20ff1d044b03f4ea7c29130a9b920e1ef84ad5"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.677822 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerStarted","Data":"19844fa6cb90b210261685cfa1be27dd550fd6f3f2988062fefbc223c93b9243"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.684926 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerStarted","Data":"92507e60c4f37b4ba4c6f857763b372418604fcb12b406df711344b8197c050c"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.694240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc","Type":"ContainerStarted","Data":"fefe8a783bfa1d1aa73385cec00043593bcf2c2fefb25df4df77513363b0e532"} Feb 02 09:18:09 crc kubenswrapper[4720]: I0202 09:18:09.710686 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kjlpf" podStartSLOduration=2.71066335 podStartE2EDuration="2.71066335s" podCreationTimestamp="2026-02-02 09:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-02 09:18:09.697818884 +0000 UTC m=+1323.553444440" watchObservedRunningTime="2026-02-02 09:18:09.71066335 +0000 UTC m=+1323.566288906" Feb 02 09:18:10 crc kubenswrapper[4720]: I0202 09:18:10.705077 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9hz57" event={"ID":"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5","Type":"ContainerStarted","Data":"00ffd2ae4abda204da28629bd4157dcb779f63e798180503776e18c9637d9287"} Feb 02 09:18:10 crc kubenswrapper[4720]: I0202 09:18:10.705734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9hz57" event={"ID":"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5","Type":"ContainerStarted","Data":"4dbe6e77a882e7c47d27218da7cf65293c1dac7c697abe4f9af857150a5972aa"} Feb 02 09:18:10 crc kubenswrapper[4720]: I0202 09:18:10.711451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" event={"ID":"f8861dbd-3f3f-4935-9a55-1cb24c812053","Type":"ContainerStarted","Data":"cdd96a32cabb3eaf367c4ef94ca3c021e8fe916a94cde0f6e98a1fb3ef1f805f"} Feb 02 09:18:10 crc kubenswrapper[4720]: I0202 09:18:10.711506 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:10 crc kubenswrapper[4720]: I0202 09:18:10.735167 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9hz57" podStartSLOduration=2.73514963 podStartE2EDuration="2.73514963s" podCreationTimestamp="2026-02-02 09:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:10.725915112 +0000 UTC m=+1324.581540668" watchObservedRunningTime="2026-02-02 09:18:10.73514963 +0000 UTC m=+1324.590775186" Feb 02 09:18:11 crc kubenswrapper[4720]: I0202 09:18:11.316741 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" podStartSLOduration=3.31672493 podStartE2EDuration="3.31672493s" podCreationTimestamp="2026-02-02 09:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:10.750186747 +0000 UTC m=+1324.605812303" watchObservedRunningTime="2026-02-02 09:18:11.31672493 +0000 UTC m=+1325.172350486" Feb 02 09:18:11 crc kubenswrapper[4720]: I0202 09:18:11.324056 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:11 crc kubenswrapper[4720]: I0202 09:18:11.350284 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.732986 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82d9de3f-df7e-4704-9faf-01d3120135fd","Type":"ContainerStarted","Data":"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.733405 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="82d9de3f-df7e-4704-9faf-01d3120135fd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093" gracePeriod=30 Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.741751 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerStarted","Data":"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.741799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerStarted","Data":"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.741809 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-log" containerID="cri-o://4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" gracePeriod=30 Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.741832 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-metadata" containerID="cri-o://46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" gracePeriod=30 Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.744489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerStarted","Data":"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.744535 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerStarted","Data":"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.756736 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.112385444 podStartE2EDuration="5.756709602s" podCreationTimestamp="2026-02-02 09:18:07 +0000 UTC" firstStartedPulling="2026-02-02 09:18:08.920796021 +0000 UTC m=+1322.776421577" lastFinishedPulling="2026-02-02 09:18:11.565120179 +0000 UTC m=+1325.420745735" observedRunningTime="2026-02-02 09:18:12.75113373 +0000 UTC m=+1326.606759296" watchObservedRunningTime="2026-02-02 09:18:12.756709602 +0000 UTC m=+1326.612335158" Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.767120 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc","Type":"ContainerStarted","Data":"3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81"} Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.814292 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.130972196 podStartE2EDuration="5.814275397s" podCreationTimestamp="2026-02-02 09:18:07 +0000 UTC" firstStartedPulling="2026-02-02 09:18:08.921405106 +0000 UTC m=+1322.777030662" lastFinishedPulling="2026-02-02 09:18:11.604708307 +0000 UTC m=+1325.460333863" observedRunningTime="2026-02-02 09:18:12.786661723 +0000 UTC m=+1326.642287289" watchObservedRunningTime="2026-02-02 09:18:12.814275397 +0000 UTC m=+1326.669900953" Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.834128 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.990568635 
podStartE2EDuration="5.834110687s" podCreationTimestamp="2026-02-02 09:18:07 +0000 UTC" firstStartedPulling="2026-02-02 09:18:08.693308427 +0000 UTC m=+1322.548933983" lastFinishedPulling="2026-02-02 09:18:11.536850479 +0000 UTC m=+1325.392476035" observedRunningTime="2026-02-02 09:18:12.828651549 +0000 UTC m=+1326.684277115" watchObservedRunningTime="2026-02-02 09:18:12.834110687 +0000 UTC m=+1326.689736243" Feb 02 09:18:12 crc kubenswrapper[4720]: I0202 09:18:12.837949 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.93884989 podStartE2EDuration="5.837941308s" podCreationTimestamp="2026-02-02 09:18:07 +0000 UTC" firstStartedPulling="2026-02-02 09:18:08.706659064 +0000 UTC m=+1322.562284620" lastFinishedPulling="2026-02-02 09:18:11.605750482 +0000 UTC m=+1325.461376038" observedRunningTime="2026-02-02 09:18:12.812086146 +0000 UTC m=+1326.667711722" watchObservedRunningTime="2026-02-02 09:18:12.837941308 +0000 UTC m=+1326.693566864" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.010686 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.391280 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.417969 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.524926 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z92v5\" (UniqueName: \"kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5\") pod \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.525041 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle\") pod \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.525103 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data\") pod \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.525163 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs\") pod \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\" (UID: \"e252883e-2ccb-439e-9b0d-cafa816f8ed6\") " Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.525468 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs" (OuterVolumeSpecName: "logs") pod "e252883e-2ccb-439e-9b0d-cafa816f8ed6" (UID: "e252883e-2ccb-439e-9b0d-cafa816f8ed6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.526020 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e252883e-2ccb-439e-9b0d-cafa816f8ed6-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.535577 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5" (OuterVolumeSpecName: "kube-api-access-z92v5") pod "e252883e-2ccb-439e-9b0d-cafa816f8ed6" (UID: "e252883e-2ccb-439e-9b0d-cafa816f8ed6"). InnerVolumeSpecName "kube-api-access-z92v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.566027 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e252883e-2ccb-439e-9b0d-cafa816f8ed6" (UID: "e252883e-2ccb-439e-9b0d-cafa816f8ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.584249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data" (OuterVolumeSpecName: "config-data") pod "e252883e-2ccb-439e-9b0d-cafa816f8ed6" (UID: "e252883e-2ccb-439e-9b0d-cafa816f8ed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.628630 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z92v5\" (UniqueName: \"kubernetes.io/projected/e252883e-2ccb-439e-9b0d-cafa816f8ed6-kube-api-access-z92v5\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.628676 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.628690 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e252883e-2ccb-439e-9b0d-cafa816f8ed6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.790966 4720 generic.go:334] "Generic (PLEG): container finished" podID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerID="46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" exitCode=0 Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791004 4720 generic.go:334] "Generic (PLEG): container finished" podID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerID="4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" exitCode=143 Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791133 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerDied","Data":"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869"} Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791238 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerDied","Data":"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55"} Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e252883e-2ccb-439e-9b0d-cafa816f8ed6","Type":"ContainerDied","Data":"19844fa6cb90b210261685cfa1be27dd550fd6f3f2988062fefbc223c93b9243"} Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791346 4720 scope.go:117] "RemoveContainer" containerID="46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.791650 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.825938 4720 scope.go:117] "RemoveContainer" containerID="4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.843022 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.854312 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.874402 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:13 crc kubenswrapper[4720]: E0202 09:18:13.874817 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-metadata" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.874836 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-metadata" Feb 02 09:18:13 crc kubenswrapper[4720]: E0202 09:18:13.874873 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-log" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.874895 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-log" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.875071 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-log" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.875089 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" containerName="nova-metadata-metadata" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.876051 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.876925 4720 scope.go:117] "RemoveContainer" containerID="46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.879107 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.879344 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.881612 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:13 crc kubenswrapper[4720]: E0202 09:18:13.918355 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869\": container with ID starting with 46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869 not found: ID does not exist" containerID="46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.918418 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869"} err="failed to get container status \"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869\": rpc error: code = NotFound desc = could not find container \"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869\": container with ID starting with 46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869 not found: ID does not exist" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.918450 4720 scope.go:117] "RemoveContainer" containerID="4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" Feb 02 09:18:13 crc kubenswrapper[4720]: E0202 09:18:13.920713 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55\": container with ID starting with 4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55 not found: ID does not exist" containerID="4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.920781 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55"} err="failed to get container status \"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55\": rpc error: code = NotFound desc = could not find container \"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55\": container with ID starting with 4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55 not found: ID does not exist" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.920815 4720 scope.go:117] "RemoveContainer" containerID="46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.921248 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869"} err="failed to get container status \"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869\": rpc error: 
code = NotFound desc = could not find container \"46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869\": container with ID starting with 46545205d6a3e62561f4390c22250dbe469298300fa1951b990624b016f6f869 not found: ID does not exist" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.921278 4720 scope.go:117] "RemoveContainer" containerID="4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.922564 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55"} err="failed to get container status \"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55\": rpc error: code = NotFound desc = could not find container \"4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55\": container with ID starting with 4af916831c4e413d6f785edfe770a71e1958061644c7b855a13730d23a842f55 not found: ID does not exist" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.934630 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.934832 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.935130 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.935235 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6mq\" (UniqueName: \"kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:13 crc kubenswrapper[4720]: I0202 09:18:13.935278 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.036597 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.037181 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs\") pod \"nova-metadata-0\" (UID: 
\"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.037311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.037393 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.037482 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6mq\" (UniqueName: \"kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.037487 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.040521 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.040710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.041525 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.081498 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6mq\" (UniqueName: \"kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq\") pod \"nova-metadata-0\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.244537 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.707698 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.815487 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerStarted","Data":"185b2e004f0f088db6342c5197d0d255654554fa737dda9e957f77b72c1794f6"} Feb 02 09:18:14 crc kubenswrapper[4720]: I0202 09:18:14.899832 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e252883e-2ccb-439e-9b0d-cafa816f8ed6" path="/var/lib/kubelet/pods/e252883e-2ccb-439e-9b0d-cafa816f8ed6/volumes" Feb 02 09:18:15 crc kubenswrapper[4720]: I0202 09:18:15.824173 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerStarted","Data":"0af1b9479539086cb94324d297b5c683b9a5d63cda81ae55ad754969030ef4d2"} Feb 02 09:18:15 crc kubenswrapper[4720]: I0202 09:18:15.824756 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerStarted","Data":"d1bcedfb660ee9ddf9cc8160561f7473fc5261047292c34c066d12817f6a8529"} Feb 02 09:18:15 crc kubenswrapper[4720]: I0202 09:18:15.851165 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.851145943 podStartE2EDuration="2.851145943s" podCreationTimestamp="2026-02-02 09:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:15.84808336 +0000 UTC m=+1329.703708916" watchObservedRunningTime="2026-02-02 09:18:15.851145943 +0000 UTC m=+1329.706771499" Feb 02 09:18:17 crc kubenswrapper[4720]: I0202 09:18:17.845411 4720 generic.go:334] "Generic (PLEG): container finished" podID="6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" containerID="00ffd2ae4abda204da28629bd4157dcb779f63e798180503776e18c9637d9287" exitCode=0 Feb 02 09:18:17 crc kubenswrapper[4720]: I0202 09:18:17.846239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9hz57" event={"ID":"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5","Type":"ContainerDied","Data":"00ffd2ae4abda204da28629bd4157dcb779f63e798180503776e18c9637d9287"} Feb 02 09:18:17 crc kubenswrapper[4720]: I0202 09:18:17.849035 4720 generic.go:334] "Generic (PLEG): container finished" podID="699b60ee-c039-48cf-8aa4-da649552c691" containerID="72706881b5cf595de4332106d140a68ee4816eae6f02406185e12f4bc571999e" exitCode=0 Feb 02 09:18:17 crc kubenswrapper[4720]: I0202 09:18:17.849087 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kjlpf" event={"ID":"699b60ee-c039-48cf-8aa4-da649552c691","Type":"ContainerDied","Data":"72706881b5cf595de4332106d140a68ee4816eae6f02406185e12f4bc571999e"} Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.010808 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.043616 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.064301 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.064381 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.447133 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.550828 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.551143 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="dnsmasq-dns" containerID="cri-o://a91821d243d26bcef6530da3764658d30def941e5932acf2b61b755a1ec70b32" gracePeriod=10 Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.749240 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.861968 4720 generic.go:334] "Generic (PLEG): container finished" podID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerID="a91821d243d26bcef6530da3764658d30def941e5932acf2b61b755a1ec70b32" exitCode=0 Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.862015 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" event={"ID":"b9363a36-d6cb-4d9d-b11e-bc62166728bd","Type":"ContainerDied","Data":"a91821d243d26bcef6530da3764658d30def941e5932acf2b61b755a1ec70b32"} Feb 02 09:18:18 crc kubenswrapper[4720]: I0202 09:18:18.927522 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.158092 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.158580 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.202049 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.245827 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.245894 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246544 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frsb\" (UniqueName: \"kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246662 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.246855 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc\") pod \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\" (UID: \"b9363a36-d6cb-4d9d-b11e-bc62166728bd\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.262400 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb" (OuterVolumeSpecName: "kube-api-access-9frsb") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "kube-api-access-9frsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.333512 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config" (OuterVolumeSpecName: "config") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.350050 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frsb\" (UniqueName: \"kubernetes.io/projected/b9363a36-d6cb-4d9d-b11e-bc62166728bd-kube-api-access-9frsb\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.350076 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.365999 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.367971 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.373793 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.388519 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b9363a36-d6cb-4d9d-b11e-bc62166728bd" (UID: "b9363a36-d6cb-4d9d-b11e-bc62166728bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.448789 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452047 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghhbv\" (UniqueName: \"kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv\") pod \"699b60ee-c039-48cf-8aa4-da649552c691\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452259 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts\") pod \"699b60ee-c039-48cf-8aa4-da649552c691\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452299 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle\") pod \"699b60ee-c039-48cf-8aa4-da649552c691\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452340 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data\") pod \"699b60ee-c039-48cf-8aa4-da649552c691\" (UID: \"699b60ee-c039-48cf-8aa4-da649552c691\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452811 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452833 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452849 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.452865 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9363a36-d6cb-4d9d-b11e-bc62166728bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.455334 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv" (OuterVolumeSpecName: "kube-api-access-ghhbv") pod "699b60ee-c039-48cf-8aa4-da649552c691" (UID: "699b60ee-c039-48cf-8aa4-da649552c691"). InnerVolumeSpecName "kube-api-access-ghhbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.457976 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts" (OuterVolumeSpecName: "scripts") pod "699b60ee-c039-48cf-8aa4-da649552c691" (UID: "699b60ee-c039-48cf-8aa4-da649552c691"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.495127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "699b60ee-c039-48cf-8aa4-da649552c691" (UID: "699b60ee-c039-48cf-8aa4-da649552c691"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.544540 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data" (OuterVolumeSpecName: "config-data") pod "699b60ee-c039-48cf-8aa4-da649552c691" (UID: "699b60ee-c039-48cf-8aa4-da649552c691"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.546846 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.553605 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle\") pod \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.553710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts\") pod \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.553759 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7gn\" (UniqueName: \"kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn\") pod \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.553808 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data\") pod \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\" (UID: \"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5\") " Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.554156 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.554175 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.554186 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/699b60ee-c039-48cf-8aa4-da649552c691-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.554194 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghhbv\" (UniqueName: 
\"kubernetes.io/projected/699b60ee-c039-48cf-8aa4-da649552c691-kube-api-access-ghhbv\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.558686 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn" (OuterVolumeSpecName: "kube-api-access-mp7gn") pod "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" (UID: "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5"). InnerVolumeSpecName "kube-api-access-mp7gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.558972 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts" (OuterVolumeSpecName: "scripts") pod "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" (UID: "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.584060 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" (UID: "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.593484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data" (OuterVolumeSpecName: "config-data") pod "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" (UID: "6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.656491 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.656530 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7gn\" (UniqueName: \"kubernetes.io/projected/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-kube-api-access-mp7gn\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.656542 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.656552 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.873104 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" event={"ID":"b9363a36-d6cb-4d9d-b11e-bc62166728bd","Type":"ContainerDied","Data":"2284ded7e66fcb4a5623e1c75e5313e5e8c12297d477ebaa6c4ba426ad838199"} Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.873156 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5865f9d689-gdtzg" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.873163 4720 scope.go:117] "RemoveContainer" containerID="a91821d243d26bcef6530da3764658d30def941e5932acf2b61b755a1ec70b32" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.875802 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kjlpf" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.875799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kjlpf" event={"ID":"699b60ee-c039-48cf-8aa4-da649552c691","Type":"ContainerDied","Data":"68f2c0cbf96cb12856c4e31f4e20ff1d044b03f4ea7c29130a9b920e1ef84ad5"} Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.875915 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f2c0cbf96cb12856c4e31f4e20ff1d044b03f4ea7c29130a9b920e1ef84ad5" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.878356 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9hz57" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.879762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9hz57" event={"ID":"6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5","Type":"ContainerDied","Data":"4dbe6e77a882e7c47d27218da7cf65293c1dac7c697abe4f9af857150a5972aa"} Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.879800 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dbe6e77a882e7c47d27218da7cf65293c1dac7c697abe4f9af857150a5972aa" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.900927 4720 scope.go:117] "RemoveContainer" containerID="2efa60fb9a2225e9a40b531989b9c69a707b3e07606d1aff1fa9564501db38fb" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.971656 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 09:18:19 crc kubenswrapper[4720]: E0202 09:18:19.971977 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="init" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.971993 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="init" Feb 02 09:18:19 crc kubenswrapper[4720]: E0202 09:18:19.972004 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="dnsmasq-dns" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972010 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="dnsmasq-dns" Feb 02 09:18:19 crc kubenswrapper[4720]: E0202 09:18:19.972086 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" containerName="nova-cell1-conductor-db-sync" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972095 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" containerName="nova-cell1-conductor-db-sync" Feb 02 09:18:19 crc kubenswrapper[4720]: E0202 09:18:19.972109 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699b60ee-c039-48cf-8aa4-da649552c691" containerName="nova-manage" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972115 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="699b60ee-c039-48cf-8aa4-da649552c691" containerName="nova-manage" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972301 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="699b60ee-c039-48cf-8aa4-da649552c691" containerName="nova-manage" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972330 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" containerName="dnsmasq-dns" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.972346 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" containerName="nova-cell1-conductor-db-sync" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.986831 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.988831 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.989014 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 09:18:19 crc kubenswrapper[4720]: I0202 09:18:19.996948 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.008806 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5865f9d689-gdtzg"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.076369 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.076666 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-log" containerID="cri-o://091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0" gracePeriod=30 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.076739 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-api" containerID="cri-o://04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f" gracePeriod=30 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.100782 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.108932 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.109147 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-log" containerID="cri-o://d1bcedfb660ee9ddf9cc8160561f7473fc5261047292c34c066d12817f6a8529" gracePeriod=30 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.109289 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-metadata" containerID="cri-o://0af1b9479539086cb94324d297b5c683b9a5d63cda81ae55ad754969030ef4d2" gracePeriod=30 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.164487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2rs7j\" (UniqueName: \"kubernetes.io/projected/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-kube-api-access-2rs7j\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.164547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.164604 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.266719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rs7j\" (UniqueName: \"kubernetes.io/projected/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-kube-api-access-2rs7j\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.266773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.266802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.274834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.276500 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.291653 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rs7j\" (UniqueName: \"kubernetes.io/projected/c16b1831-a551-4fa5-ba76-5bf2c7bd2782-kube-api-access-2rs7j\") pod \"nova-cell1-conductor-0\" (UID: \"c16b1831-a551-4fa5-ba76-5bf2c7bd2782\") " pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.306474 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.768411 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.886998 4720 generic.go:334] "Generic (PLEG): container finished" podID="1e67afdb-544c-488f-81c3-43e1931adef9" containerID="0af1b9479539086cb94324d297b5c683b9a5d63cda81ae55ad754969030ef4d2" exitCode=0 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.887273 4720 generic.go:334] "Generic (PLEG): container finished" podID="1e67afdb-544c-488f-81c3-43e1931adef9" containerID="d1bcedfb660ee9ddf9cc8160561f7473fc5261047292c34c066d12817f6a8529" exitCode=143 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.888045 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerDied","Data":"0af1b9479539086cb94324d297b5c683b9a5d63cda81ae55ad754969030ef4d2"} Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.888147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerDied","Data":"d1bcedfb660ee9ddf9cc8160561f7473fc5261047292c34c066d12817f6a8529"} Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.906984 4720 generic.go:334] "Generic (PLEG): container finished" podID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerID="091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0" exitCode=143 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.912186 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerName="nova-scheduler-scheduler" containerID="cri-o://3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" gracePeriod=30 Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.914132 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9363a36-d6cb-4d9d-b11e-bc62166728bd" path="/var/lib/kubelet/pods/b9363a36-d6cb-4d9d-b11e-bc62166728bd/volumes" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.914915 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e67afdb-544c-488f-81c3-43e1931adef9","Type":"ContainerDied","Data":"185b2e004f0f088db6342c5197d0d255654554fa737dda9e957f77b72c1794f6"} Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.914938 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="185b2e004f0f088db6342c5197d0d255654554fa737dda9e957f77b72c1794f6" Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.914949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerDied","Data":"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0"} Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.914961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c16b1831-a551-4fa5-ba76-5bf2c7bd2782","Type":"ContainerStarted","Data":"49a872238ed3f5acba36e75d8d50e4312407e9fcd9a17a742b46b21275f5e783"} Feb 02 09:18:20 crc kubenswrapper[4720]: I0202 09:18:20.923733 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.085096 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs\") pod \"1e67afdb-544c-488f-81c3-43e1931adef9\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.085151 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle\") pod \"1e67afdb-544c-488f-81c3-43e1931adef9\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.085204 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data\") pod \"1e67afdb-544c-488f-81c3-43e1931adef9\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.085240 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs\") pod \"1e67afdb-544c-488f-81c3-43e1931adef9\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.085294 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6mq\" (UniqueName: \"kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq\") pod \"1e67afdb-544c-488f-81c3-43e1931adef9\" (UID: \"1e67afdb-544c-488f-81c3-43e1931adef9\") " Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.088990 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs" (OuterVolumeSpecName: "logs") pod "1e67afdb-544c-488f-81c3-43e1931adef9" (UID: "1e67afdb-544c-488f-81c3-43e1931adef9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.105073 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq" (OuterVolumeSpecName: "kube-api-access-6m6mq") pod "1e67afdb-544c-488f-81c3-43e1931adef9" (UID: "1e67afdb-544c-488f-81c3-43e1931adef9"). InnerVolumeSpecName "kube-api-access-6m6mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.115639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e67afdb-544c-488f-81c3-43e1931adef9" (UID: "1e67afdb-544c-488f-81c3-43e1931adef9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.116964 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data" (OuterVolumeSpecName: "config-data") pod "1e67afdb-544c-488f-81c3-43e1931adef9" (UID: "1e67afdb-544c-488f-81c3-43e1931adef9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.136181 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1e67afdb-544c-488f-81c3-43e1931adef9" (UID: "1e67afdb-544c-488f-81c3-43e1931adef9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.187256 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.187288 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.187303 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e67afdb-544c-488f-81c3-43e1931adef9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.187314 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e67afdb-544c-488f-81c3-43e1931adef9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.187326 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6mq\" (UniqueName: \"kubernetes.io/projected/1e67afdb-544c-488f-81c3-43e1931adef9-kube-api-access-6m6mq\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.926760 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.926772 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c16b1831-a551-4fa5-ba76-5bf2c7bd2782","Type":"ContainerStarted","Data":"2c5a5da1dafde01df382be21199f2507666d5815301441e9d2b3dd567b91260c"} Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.927321 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.969357 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9693372179999997 podStartE2EDuration="2.969337218s" podCreationTimestamp="2026-02-02 09:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:21.96311662 +0000 UTC m=+1335.818742216" watchObservedRunningTime="2026-02-02 09:18:21.969337218 +0000 UTC m=+1335.824962784" Feb 02 09:18:21 crc kubenswrapper[4720]: I0202 09:18:21.992827 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.001229 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.025421 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:22 crc kubenswrapper[4720]: E0202 09:18:22.025907 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-log" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.025929 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-log" Feb 02 09:18:22 crc kubenswrapper[4720]: E0202 09:18:22.025952 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-metadata" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.025961 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-metadata" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.026175 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-log" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.026199 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" containerName="nova-metadata-metadata" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.027365 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.030844 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.031284 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.073530 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.214930 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.215281 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.215367 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.215407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.215548 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdxk2\" (UniqueName: \"kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.316654 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.316703 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.316824 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdxk2\" (UniqueName: \"kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " 
pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.316851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.316893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.317341 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.323129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.332481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.341337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.345067 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdxk2\" (UniqueName: \"kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2\") pod \"nova-metadata-0\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.374339 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.830077 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:18:22 crc kubenswrapper[4720]: W0202 09:18:22.831891 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92598ee6_7272_4e72_9616_60308a02970a.slice/crio-97a61fc0bebd995ad9d8929ea322f4763e0d394d61fcc7226f8a9b6e1679a1f0 WatchSource:0}: Error finding container 97a61fc0bebd995ad9d8929ea322f4763e0d394d61fcc7226f8a9b6e1679a1f0: Status 404 returned error can't find the container with id 97a61fc0bebd995ad9d8929ea322f4763e0d394d61fcc7226f8a9b6e1679a1f0 Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.915494 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e67afdb-544c-488f-81c3-43e1931adef9" path="/var/lib/kubelet/pods/1e67afdb-544c-488f-81c3-43e1931adef9/volumes" Feb 02 09:18:22 crc kubenswrapper[4720]: I0202 09:18:22.944586 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerStarted","Data":"97a61fc0bebd995ad9d8929ea322f4763e0d394d61fcc7226f8a9b6e1679a1f0"} Feb 02 09:18:23 crc kubenswrapper[4720]: E0202 09:18:23.013301 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:18:23 crc kubenswrapper[4720]: E0202 09:18:23.017218 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:18:23 crc kubenswrapper[4720]: E0202 09:18:23.019088 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:18:23 crc kubenswrapper[4720]: E0202 09:18:23.019121 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerName="nova-scheduler-scheduler" Feb 02 09:18:23 crc kubenswrapper[4720]: I0202 09:18:23.954109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerStarted","Data":"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd"} Feb 02 09:18:23 crc kubenswrapper[4720]: I0202 09:18:23.954325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerStarted","Data":"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"} Feb 02 09:18:23 crc kubenswrapper[4720]: I0202 09:18:23.956306 4720 generic.go:334] "Generic 
(PLEG): container finished" podID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerID="3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" exitCode=0 Feb 02 09:18:23 crc kubenswrapper[4720]: I0202 09:18:23.956342 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc","Type":"ContainerDied","Data":"3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81"} Feb 02 09:18:23 crc kubenswrapper[4720]: I0202 09:18:23.985000 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.98497936 podStartE2EDuration="2.98497936s" podCreationTimestamp="2026-02-02 09:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:23.978786393 +0000 UTC m=+1337.834411949" watchObservedRunningTime="2026-02-02 09:18:23.98497936 +0000 UTC m=+1337.840604916" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.108546 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.161181 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfsp8\" (UniqueName: \"kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8\") pod \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.161307 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle\") pod \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.161567 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data\") pod \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\" (UID: \"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.171034 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8" (OuterVolumeSpecName: "kube-api-access-jfsp8") pod "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" (UID: "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc"). InnerVolumeSpecName "kube-api-access-jfsp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.216169 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" (UID: "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.220090 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data" (OuterVolumeSpecName: "config-data") pod "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" (UID: "99ddaa72-38e7-4bb8-9f22-fe7e747e12dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.265216 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.265291 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfsp8\" (UniqueName: \"kubernetes.io/projected/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-kube-api-access-jfsp8\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.265311 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.934792 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.979632 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data\") pod \"9e3052fe-f13b-48f5-b285-9cae81db85a9\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.979679 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle\") pod \"9e3052fe-f13b-48f5-b285-9cae81db85a9\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.979801 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd4xj\" (UniqueName: \"kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj\") pod \"9e3052fe-f13b-48f5-b285-9cae81db85a9\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.979824 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs\") pod \"9e3052fe-f13b-48f5-b285-9cae81db85a9\" (UID: \"9e3052fe-f13b-48f5-b285-9cae81db85a9\") " Feb 02 09:18:24 crc kubenswrapper[4720]: I0202 09:18:24.999764 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs" (OuterVolumeSpecName: "logs") pod "9e3052fe-f13b-48f5-b285-9cae81db85a9" (UID: "9e3052fe-f13b-48f5-b285-9cae81db85a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002005 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj" (OuterVolumeSpecName: "kube-api-access-kd4xj") pod "9e3052fe-f13b-48f5-b285-9cae81db85a9" (UID: "9e3052fe-f13b-48f5-b285-9cae81db85a9"). InnerVolumeSpecName "kube-api-access-kd4xj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002548 4720 generic.go:334] "Generic (PLEG): container finished" podID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerID="04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f" exitCode=0 Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002652 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerDied","Data":"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f"} Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002756 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9e3052fe-f13b-48f5-b285-9cae81db85a9","Type":"ContainerDied","Data":"92507e60c4f37b4ba4c6f857763b372418604fcb12b406df711344b8197c050c"} Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.002774 4720 scope.go:117] "RemoveContainer" containerID="04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.011096 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.012083 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ddaa72-38e7-4bb8-9f22-fe7e747e12dc","Type":"ContainerDied","Data":"fefe8a783bfa1d1aa73385cec00043593bcf2c2fefb25df4df77513363b0e532"} Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.016828 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data" (OuterVolumeSpecName: "config-data") pod "9e3052fe-f13b-48f5-b285-9cae81db85a9" (UID: "9e3052fe-f13b-48f5-b285-9cae81db85a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.022615 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e3052fe-f13b-48f5-b285-9cae81db85a9" (UID: "9e3052fe-f13b-48f5-b285-9cae81db85a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.039426 4720 scope.go:117] "RemoveContainer" containerID="091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.069020 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.083480 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd4xj\" (UniqueName: \"kubernetes.io/projected/9e3052fe-f13b-48f5-b285-9cae81db85a9-kube-api-access-kd4xj\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.083509 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e3052fe-f13b-48f5-b285-9cae81db85a9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.083519 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.083552 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e3052fe-f13b-48f5-b285-9cae81db85a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.104042 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.117506 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: E0202 09:18:25.117910 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-log" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.117929 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-log" Feb 02 09:18:25 crc kubenswrapper[4720]: E0202 09:18:25.117953 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerName="nova-scheduler-scheduler" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.117959 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerName="nova-scheduler-scheduler" Feb 02 09:18:25 crc kubenswrapper[4720]: E0202 09:18:25.117967 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-api" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.117974 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-api" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.118166 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" containerName="nova-scheduler-scheduler" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.118180 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" containerName="nova-api-log" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.118192 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" 
containerName="nova-api-api" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.118799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.120376 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.137771 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.140670 4720 scope.go:117] "RemoveContainer" containerID="04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f" Feb 02 09:18:25 crc kubenswrapper[4720]: E0202 09:18:25.141082 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f\": container with ID starting with 04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f not found: ID does not exist" containerID="04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.141113 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f"} err="failed to get container status \"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f\": rpc error: code = NotFound desc = could not find container \"04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f\": container with ID starting with 04e14772aaa5dd1079b96c8a017fa636d6f2bd5079cb3ea76abf8830d55c302f not found: ID does not exist" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.141137 4720 scope.go:117] "RemoveContainer" containerID="091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0" Feb 02 09:18:25 crc kubenswrapper[4720]: E0202 09:18:25.141549 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0\": container with ID starting with 091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0 not found: ID does not exist" containerID="091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.141598 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0"} err="failed to get container status \"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0\": rpc error: code = NotFound desc = could not find container \"091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0\": container with ID starting with 091b162cbf38c4b87b5ec10c6fe927ad764f9ae7f60aff6a0b3890aa22b4f9c0 not found: ID does not exist" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.141631 4720 scope.go:117] "RemoveContainer" containerID="3d42939531a4d696e2c1a81a9270c86c43a45ed5e36276f55e80094a99815e81" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.286701 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4t29\" (UniqueName: \"kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " 
pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.286781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.286822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.345937 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.359618 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.367866 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.369332 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.372454 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.377928 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.388506 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.388570 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.388697 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4t29\" (UniqueName: \"kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.393334 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.399255 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc 
kubenswrapper[4720]: I0202 09:18:25.403933 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4t29\" (UniqueName: \"kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29\") pod \"nova-scheduler-0\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") " pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.449013 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.490289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnbd\" (UniqueName: \"kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.490448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.490498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.493674 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.595492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.596018 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.596068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.596120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.596447 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnbd\" (UniqueName: 
\"kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.600090 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.612431 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.622397 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnbd\" (UniqueName: \"kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd\") pod \"nova-api-0\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.684408 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:25 crc kubenswrapper[4720]: I0202 09:18:25.890541 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:18:25 crc kubenswrapper[4720]: W0202 09:18:25.894770 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode37a1c88_77d0_44bc_a53c_bf98e4bd8f7c.slice/crio-136b8ae6c3c90d7c9eb98751fa1e4320cd4431db97a2bb698213510a9da2534c WatchSource:0}: Error finding container 136b8ae6c3c90d7c9eb98751fa1e4320cd4431db97a2bb698213510a9da2534c: Status 404 returned error can't find the container with id 136b8ae6c3c90d7c9eb98751fa1e4320cd4431db97a2bb698213510a9da2534c Feb 02 09:18:26 crc kubenswrapper[4720]: I0202 09:18:26.022161 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c","Type":"ContainerStarted","Data":"136b8ae6c3c90d7c9eb98751fa1e4320cd4431db97a2bb698213510a9da2534c"} Feb 02 09:18:26 crc kubenswrapper[4720]: I0202 09:18:26.135774 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:26 crc kubenswrapper[4720]: I0202 09:18:26.902733 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ddaa72-38e7-4bb8-9f22-fe7e747e12dc" path="/var/lib/kubelet/pods/99ddaa72-38e7-4bb8-9f22-fe7e747e12dc/volumes" Feb 02 09:18:26 crc kubenswrapper[4720]: I0202 09:18:26.903913 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3052fe-f13b-48f5-b285-9cae81db85a9" path="/var/lib/kubelet/pods/9e3052fe-f13b-48f5-b285-9cae81db85a9/volumes" Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.031485 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerStarted","Data":"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514"} Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.031528 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerStarted","Data":"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f"} Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.031537 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerStarted","Data":"3bdccb2d4a9064bf016fc4840cc6a332307e38863e99d45d0ce79a9a80fb9a15"} Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.033544 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c","Type":"ContainerStarted","Data":"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"} Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.060854 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.06083579 podStartE2EDuration="2.06083579s" podCreationTimestamp="2026-02-02 09:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:27.053151948 +0000 UTC m=+1340.908777504" watchObservedRunningTime="2026-02-02 09:18:27.06083579 +0000 UTC m=+1340.916461346" Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.079490 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.079472732 podStartE2EDuration="2.079472732s" podCreationTimestamp="2026-02-02 09:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:27.073560321 +0000 UTC m=+1340.929185957" watchObservedRunningTime="2026-02-02 09:18:27.079472732 +0000 UTC m=+1340.935098288" Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.195797 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.374808 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 09:18:27 crc kubenswrapper[4720]: I0202 09:18:27.375059 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 09:18:30 crc kubenswrapper[4720]: I0202 09:18:30.338127 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 09:18:30 crc kubenswrapper[4720]: I0202 09:18:30.449420 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 09:18:30 crc kubenswrapper[4720]: I0202 09:18:30.829455 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:18:30 crc kubenswrapper[4720]: I0202 09:18:30.829811 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" containerName="kube-state-metrics" containerID="cri-o://e44c1ea25149a26f3ac38db981f5b3ff788d28903df82739cf3e2405c31b3b2a" gracePeriod=30 Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.096450 4720 generic.go:334] "Generic (PLEG): container finished" podID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" containerID="e44c1ea25149a26f3ac38db981f5b3ff788d28903df82739cf3e2405c31b3b2a" exitCode=2 Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.096490 4720 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26b9fd3f-f554-4920-ba34-8e8dc34b78ed","Type":"ContainerDied","Data":"e44c1ea25149a26f3ac38db981f5b3ff788d28903df82739cf3e2405c31b3b2a"} Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.422585 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.573227 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvp2v\" (UniqueName: \"kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v\") pod \"26b9fd3f-f554-4920-ba34-8e8dc34b78ed\" (UID: \"26b9fd3f-f554-4920-ba34-8e8dc34b78ed\") " Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.579655 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v" (OuterVolumeSpecName: "kube-api-access-mvp2v") pod "26b9fd3f-f554-4920-ba34-8e8dc34b78ed" (UID: "26b9fd3f-f554-4920-ba34-8e8dc34b78ed"). InnerVolumeSpecName "kube-api-access-mvp2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:31 crc kubenswrapper[4720]: I0202 09:18:31.675116 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvp2v\" (UniqueName: \"kubernetes.io/projected/26b9fd3f-f554-4920-ba34-8e8dc34b78ed-kube-api-access-mvp2v\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.110563 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26b9fd3f-f554-4920-ba34-8e8dc34b78ed","Type":"ContainerDied","Data":"9f4a4a91f2c2598e4a1d3cf698dcdecfa87a0c864e2bb2605da79cd467fe0419"} Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.110867 4720 scope.go:117] "RemoveContainer" containerID="e44c1ea25149a26f3ac38db981f5b3ff788d28903df82739cf3e2405c31b3b2a" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.110904 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.164516 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.180415 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.193027 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 09:18:32 crc kubenswrapper[4720]: E0202 09:18:32.193548 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" containerName="kube-state-metrics" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.193570 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" containerName="kube-state-metrics" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.193805 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" containerName="kube-state-metrics" Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.194635 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.196954 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.197149 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.201936 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.374825 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.374864 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.387968 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p99b\" (UniqueName: \"kubernetes.io/projected/17fd5894-4433-498d-8d28-b2fa366949d3-kube-api-access-4p99b\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.388059 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.388110 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.388135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.489494 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.489541 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.489561 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.489725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p99b\" (UniqueName: \"kubernetes.io/projected/17fd5894-4433-498d-8d28-b2fa366949d3-kube-api-access-4p99b\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.494705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.495996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.520813 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/17fd5894-4433-498d-8d28-b2fa366949d3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.520986 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p99b\" (UniqueName: \"kubernetes.io/projected/17fd5894-4433-498d-8d28-b2fa366949d3-kube-api-access-4p99b\") pod \"kube-state-metrics-0\" (UID: \"17fd5894-4433-498d-8d28-b2fa366949d3\") " pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.701867 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.702181 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-central-agent" containerID="cri-o://80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9" gracePeriod=30
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.702325 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="proxy-httpd" containerID="cri-o://f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0" gracePeriod=30
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.702508 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="sg-core" containerID="cri-o://4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321" gracePeriod=30
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.702399 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-notification-agent" containerID="cri-o://61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc" gracePeriod=30
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.809763 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 09:18:32 crc kubenswrapper[4720]: I0202 09:18:32.912741 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b9fd3f-f554-4920-ba34-8e8dc34b78ed" path="/var/lib/kubelet/pods/26b9fd3f-f554-4920-ba34-8e8dc34b78ed/volumes"
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.130266 4720 generic.go:334] "Generic (PLEG): container finished" podID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerID="f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0" exitCode=0
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.130309 4720 generic.go:334] "Generic (PLEG): container finished" podID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerID="4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321" exitCode=2
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.130340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerDied","Data":"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"}
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.130378 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerDied","Data":"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"}
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.387042 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:18:33 crc kubenswrapper[4720]: I0202 09:18:33.387055 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:18:34 crc kubenswrapper[4720]: I0202 09:18:34.122772 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 09:18:34 crc kubenswrapper[4720]: I0202 09:18:34.150434 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17fd5894-4433-498d-8d28-b2fa366949d3","Type":"ContainerStarted","Data":"de89a7dfa2595024e18b7111fe3570d0fb7aa8d50381aa1f8a63d4d66610cb33"}
Feb 02 09:18:34 crc kubenswrapper[4720]: I0202 09:18:34.152941 4720 generic.go:334] "Generic (PLEG): container finished" podID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerID="80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9" exitCode=0
Feb 02 09:18:34 crc kubenswrapper[4720]: I0202 09:18:34.152973 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerDied","Data":"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"}
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.045547 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144498 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144590 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144624 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kf9j\" (UniqueName: \"kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144645 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144728 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144874 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.144935 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd\") pod \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\" (UID: \"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd\") "
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.145798 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.145966 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.146167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.150900 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j" (OuterVolumeSpecName: "kube-api-access-9kf9j") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "kube-api-access-9kf9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.156133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts" (OuterVolumeSpecName: "scripts") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.176678 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"17fd5894-4433-498d-8d28-b2fa366949d3","Type":"ContainerStarted","Data":"91a42b2fe90b72a054f6b01a84434856ea605de6e54a9a37bcac6ea3f011a2db"}
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.176797 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.188609 4720 generic.go:334] "Generic (PLEG): container finished" podID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerID="61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc" exitCode=0
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.188704 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerDied","Data":"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"}
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.188775 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55c7047b-9eec-4c99-a0d8-3ffcd63a11cd","Type":"ContainerDied","Data":"f672603b7c6c5ca06a0c61f4dfd761e607df0375303f8641b1f6c3cd23045e87"}
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.188794 4720 scope.go:117] "RemoveContainer" containerID="f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.188955 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.197202 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.850781612 podStartE2EDuration="3.197182565s" podCreationTimestamp="2026-02-02 09:18:32 +0000 UTC" firstStartedPulling="2026-02-02 09:18:34.114374212 +0000 UTC m=+1347.969999768" lastFinishedPulling="2026-02-02 09:18:34.460775125 +0000 UTC m=+1348.316400721" observedRunningTime="2026-02-02 09:18:35.195924996 +0000 UTC m=+1349.051550552" watchObservedRunningTime="2026-02-02 09:18:35.197182565 +0000 UTC m=+1349.052808121"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.201560 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.214759 4720 scope.go:117] "RemoveContainer" containerID="4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.229437 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.247491 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.247527 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kf9j\" (UniqueName: \"kubernetes.io/projected/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-kube-api-access-9kf9j\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.247537 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.247546 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.247556 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.259963 4720 scope.go:117] "RemoveContainer" containerID="61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.268504 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data" (OuterVolumeSpecName: "config-data") pod "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" (UID: "55c7047b-9eec-4c99-a0d8-3ffcd63a11cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.284756 4720 scope.go:117] "RemoveContainer" containerID="80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.310563 4720 scope.go:117] "RemoveContainer" containerID="f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.310950 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0\": container with ID starting with f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0 not found: ID does not exist" containerID="f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.310998 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0"} err="failed to get container status \"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0\": rpc error: code = NotFound desc = could not find container \"f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0\": container with ID starting with f037c1f7c5baa2bf82a0e2264d8f45f14c519c022680ec04075f3a1640b6d1f0 not found: ID does not exist"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.311024 4720 scope.go:117] "RemoveContainer" containerID="4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.311344 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321\": container with ID starting with 4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321 not found: ID does not exist" containerID="4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.311376 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321"} err="failed to get container status \"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321\": rpc error: code = NotFound desc = could not find container \"4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321\": container with ID starting with 4390884df5e0a35e6968774f19519efba5ee6c277958fa38981e1c52ea1ab321 not found: ID does not exist"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.311399 4720 scope.go:117] "RemoveContainer" containerID="61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.311737 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc\": container with ID starting with 61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc not found: ID does not exist" containerID="61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.311779 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc"} err="failed to get container status \"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc\": rpc error: code = NotFound desc = could not find container \"61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc\": container with ID starting with 61514543ec15373da507b4cb668269d4674e1a3f1a8fabb62b2b9092ff34d7dc not found: ID does not exist"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.311806 4720 scope.go:117] "RemoveContainer" containerID="80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.312079 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9\": container with ID starting with 80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9 not found: ID does not exist" containerID="80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.312103 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9"} err="failed to get container status \"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9\": rpc error: code = NotFound desc = could not find container \"80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9\": container with ID starting with 80176c3b2487be5162a0f6e81afabc794a3d666bd353fc200706737c268f15c9 not found: ID does not exist"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.349225 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.449765 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.486852 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.568051 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.577873 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.584537 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.584951 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-central-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.584968 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-central-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.584982 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-notification-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.584989 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-notification-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.585011 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="proxy-httpd"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585018 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="proxy-httpd"
Feb 02 09:18:35 crc kubenswrapper[4720]: E0202 09:18:35.585033 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="sg-core"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585040 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="sg-core"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585207 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="proxy-httpd"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585222 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-central-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585239 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="ceilometer-notification-agent"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.585251 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" containerName="sg-core"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.586826 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.590406 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.590483 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.590591 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.610283 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.684920 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.684970 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757172 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757442 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757550 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757618 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5k7\" (UniqueName: \"kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757693 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.757855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.758131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860192 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860214 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860287 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860392 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860438 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.860473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5k7\" (UniqueName: \"kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.861286 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.862075 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.864380 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.865085 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.866071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.866348 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.877029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.877588 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5k7\" (UniqueName: \"kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7\") pod \"ceilometer-0\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " pod="openstack/ceilometer-0"
Feb 02 09:18:35 crc kubenswrapper[4720]: I0202 09:18:35.906434 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 09:18:36 crc kubenswrapper[4720]: I0202 09:18:36.229252 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 09:18:36 crc kubenswrapper[4720]: I0202 09:18:36.353052 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:36 crc kubenswrapper[4720]: I0202 09:18:36.768094 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:18:36 crc kubenswrapper[4720]: I0202 09:18:36.768206 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:18:36 crc kubenswrapper[4720]: I0202 09:18:36.903626 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c7047b-9eec-4c99-a0d8-3ffcd63a11cd" path="/var/lib/kubelet/pods/55c7047b-9eec-4c99-a0d8-3ffcd63a11cd/volumes"
Feb 02 09:18:37 crc kubenswrapper[4720]: I0202 09:18:37.211092 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerStarted","Data":"f68f1a5e1e314c84943a523259ebab5201398fa5582b4d1fe38e5db13ef6b7ee"}
Feb 02 09:18:37 crc kubenswrapper[4720]: I0202 09:18:37.211467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerStarted","Data":"fda5e4455ba5825b1679bd0a4726e65edb3644712af7a939e88d85b93d132716"}
Feb 02 09:18:38 crc kubenswrapper[4720]: I0202 09:18:38.225350 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerStarted","Data":"bb044bcf1e937eef84a9af6be69da2d9411b3a20eb298a375d173325fb37e0d9"}
Feb 02 09:18:39 crc kubenswrapper[4720]: I0202 09:18:39.243357 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerStarted","Data":"cef0c096ce28470c19df6b9fdc08acd09d2c6424ecb8cf35503bdb8f241b7ea2"}
Feb 02 09:18:41 crc kubenswrapper[4720]: I0202 09:18:41.266183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerStarted","Data":"a8a428d81c2a4b24880f0e97c63c021c20e172d481085396c2cd3726e26b8965"}
Feb 02 09:18:41 crc kubenswrapper[4720]: I0202 09:18:41.267785 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 09:18:41 crc kubenswrapper[4720]: I0202 09:18:41.300034 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.97254999 podStartE2EDuration="6.300006176s" podCreationTimestamp="2026-02-02 09:18:35 +0000 UTC" firstStartedPulling="2026-02-02 09:18:36.366157942 +0000 UTC m=+1350.221783518" lastFinishedPulling="2026-02-02 09:18:40.693614108 +0000 UTC m=+1354.549239704" observedRunningTime="2026-02-02 09:18:41.292943328 +0000 UTC m=+1355.148568904" watchObservedRunningTime="2026-02-02 09:18:41.300006176 +0000 UTC m=+1355.155631772"
Feb 02 09:18:42 crc kubenswrapper[4720]: I0202 09:18:42.381345 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 09:18:42 crc kubenswrapper[4720]: I0202 09:18:42.383494 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 09:18:42 crc kubenswrapper[4720]: I0202 09:18:42.395981 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 09:18:42 crc kubenswrapper[4720]: I0202 09:18:42.857690 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 02 09:18:42 crc kubenswrapper[4720]: E0202 09:18:42.938065 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82d9de3f_df7e_4704_9faf_01d3120135fd.slice/crio-4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.269417 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.288295 4720 generic.go:334] "Generic (PLEG): container finished" podID="82d9de3f-df7e-4704-9faf-01d3120135fd" containerID="4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093" exitCode=137
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.288522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82d9de3f-df7e-4704-9faf-01d3120135fd","Type":"ContainerDied","Data":"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"}
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.288613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"82d9de3f-df7e-4704-9faf-01d3120135fd","Type":"ContainerDied","Data":"e440ba9f8087439b2e7b49a3c2b1e83620b04ae4e376d3cb302d860dfc8e9afe"}
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.288648 4720 scope.go:117] "RemoveContainer" containerID="4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.288904 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.306327 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.316656 4720 scope.go:117] "RemoveContainer" containerID="4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"
Feb 02 09:18:43 crc kubenswrapper[4720]: E0202 09:18:43.317092 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093\": container with ID starting with 4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093 not found: ID does not exist" containerID="4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.317148 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093"} err="failed to get container status \"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093\": rpc error: code = NotFound desc = could not find container \"4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093\": container with ID starting with 4ffbd9cd427e5d444facdee1c9e6a5a9015b78f442c1f76283ab4b862868d093 not found: ID does not exist"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.326957 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data\") pod \"82d9de3f-df7e-4704-9faf-01d3120135fd\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") "
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.327038 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle\") pod \"82d9de3f-df7e-4704-9faf-01d3120135fd\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") "
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.327188 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vh8s\" (UniqueName: \"kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s\") pod \"82d9de3f-df7e-4704-9faf-01d3120135fd\" (UID: \"82d9de3f-df7e-4704-9faf-01d3120135fd\") "
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.335545 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s" (OuterVolumeSpecName: "kube-api-access-4vh8s") pod "82d9de3f-df7e-4704-9faf-01d3120135fd" (UID: "82d9de3f-df7e-4704-9faf-01d3120135fd"). InnerVolumeSpecName "kube-api-access-4vh8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.400976 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82d9de3f-df7e-4704-9faf-01d3120135fd" (UID: "82d9de3f-df7e-4704-9faf-01d3120135fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.407130 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data" (OuterVolumeSpecName: "config-data") pod "82d9de3f-df7e-4704-9faf-01d3120135fd" (UID: "82d9de3f-df7e-4704-9faf-01d3120135fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.431621 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vh8s\" (UniqueName: \"kubernetes.io/projected/82d9de3f-df7e-4704-9faf-01d3120135fd-kube-api-access-4vh8s\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.431652 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.431664 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d9de3f-df7e-4704-9faf-01d3120135fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.623472 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.635835 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.645736 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 09:18:43 crc kubenswrapper[4720]: E0202 09:18:43.646613 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d9de3f-df7e-4704-9faf-01d3120135fd" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.646635 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d9de3f-df7e-4704-9faf-01d3120135fd" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.647134 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d9de3f-df7e-4704-9faf-01d3120135fd" containerName="nova-cell1-novncproxy-novncproxy"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.649765 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.656755 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.657297 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.657569 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.682931 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.737708 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.738042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptm8\" (UniqueName: \"kubernetes.io/projected/c9671844-9042-4f97-8d10-12a7e1794c3e-kube-api-access-bptm8\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.738124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.738221 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.738348 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.840388 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.840687 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptm8\" (UniqueName: \"kubernetes.io/projected/c9671844-9042-4f97-8d10-12a7e1794c3e-kube-api-access-bptm8\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.840827 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.840959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.841097 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.846432 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.846792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.847208 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.854505 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9671844-9042-4f97-8d10-12a7e1794c3e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:43 crc kubenswrapper[4720]: I0202 09:18:43.863997 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptm8\" (UniqueName: \"kubernetes.io/projected/c9671844-9042-4f97-8d10-12a7e1794c3e-kube-api-access-bptm8\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9671844-9042-4f97-8d10-12a7e1794c3e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:44 crc kubenswrapper[4720]: I0202 09:18:44.001114 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:44 crc kubenswrapper[4720]: I0202 09:18:44.513117 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 09:18:44 crc kubenswrapper[4720]: W0202 09:18:44.520130 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9671844_9042_4f97_8d10_12a7e1794c3e.slice/crio-2a36eba503ea15eb9f235c06b1f9d6a33c46b70af13dad231883ddba37ba0c8d WatchSource:0}: Error finding container 2a36eba503ea15eb9f235c06b1f9d6a33c46b70af13dad231883ddba37ba0c8d: Status 404 returned error can't find the container with id 2a36eba503ea15eb9f235c06b1f9d6a33c46b70af13dad231883ddba37ba0c8d
Feb 02 09:18:44 crc kubenswrapper[4720]: I0202 09:18:44.900364 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d9de3f-df7e-4704-9faf-01d3120135fd" path="/var/lib/kubelet/pods/82d9de3f-df7e-4704-9faf-01d3120135fd/volumes"
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.316718 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9671844-9042-4f97-8d10-12a7e1794c3e","Type":"ContainerStarted","Data":"5da467f64c50c4d0ecca7391b74c84f1953093eab7c697769a61db96af3e9e73"}
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.316797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9671844-9042-4f97-8d10-12a7e1794c3e","Type":"ContainerStarted","Data":"2a36eba503ea15eb9f235c06b1f9d6a33c46b70af13dad231883ddba37ba0c8d"}
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.346804 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.346777556 podStartE2EDuration="2.346777556s" podCreationTimestamp="2026-02-02 09:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:45.346629993 +0000 UTC m=+1359.202255559" watchObservedRunningTime="2026-02-02 09:18:45.346777556 +0000 UTC m=+1359.202403152"
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.689321 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.690311 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.693230 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 09:18:45 crc kubenswrapper[4720]: I0202 09:18:45.697973 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.332592 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.337167 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.537363 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.538829 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.567951 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600773 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600804 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvk4\" (UniqueName: \"kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600839 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.600922 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703005 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703212 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvk4\" (UniqueName: \"kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703365 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.703967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.704016 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.704310 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.704559 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.704802 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.721552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvk4\" (UniqueName: \"kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4\") pod \"dnsmasq-dns-6559f4fbd7-5jrzr\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:46 crc kubenswrapper[4720]: I0202 09:18:46.885315 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:18:47 crc kubenswrapper[4720]: I0202 09:18:47.289923 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:18:47 crc kubenswrapper[4720]: I0202 09:18:47.345549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" event={"ID":"45c0a1be-8f81-4819-bd4b-29ba05a8bce2","Type":"ContainerStarted","Data":"2b1014bbce5d56dd5c30ed7400d3f80386796ffde987f65b02314eb74b7f1703"}
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.356116 4720 generic.go:334] "Generic (PLEG): container finished" podID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerID="20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209" exitCode=0
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.356295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" event={"ID":"45c0a1be-8f81-4819-bd4b-29ba05a8bce2","Type":"ContainerDied","Data":"20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209"}
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.808060 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.808579 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-central-agent" containerID="cri-o://f68f1a5e1e314c84943a523259ebab5201398fa5582b4d1fe38e5db13ef6b7ee" gracePeriod=30
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.808655 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="sg-core" containerID="cri-o://cef0c096ce28470c19df6b9fdc08acd09d2c6424ecb8cf35503bdb8f241b7ea2" gracePeriod=30
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.808658 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="proxy-httpd" containerID="cri-o://a8a428d81c2a4b24880f0e97c63c021c20e172d481085396c2cd3726e26b8965" gracePeriod=30
Feb 02 09:18:48 crc kubenswrapper[4720]: I0202 09:18:48.808664 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-notification-agent" containerID="cri-o://bb044bcf1e937eef84a9af6be69da2d9411b3a20eb298a375d173325fb37e0d9" gracePeriod=30
Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.002229 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.062385 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.366534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
event={"ID":"45c0a1be-8f81-4819-bd4b-29ba05a8bce2","Type":"ContainerStarted","Data":"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"} Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.366638 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.369783 4720 generic.go:334] "Generic (PLEG): container finished" podID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerID="a8a428d81c2a4b24880f0e97c63c021c20e172d481085396c2cd3726e26b8965" exitCode=0 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.369810 4720 generic.go:334] "Generic (PLEG): container finished" podID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerID="cef0c096ce28470c19df6b9fdc08acd09d2c6424ecb8cf35503bdb8f241b7ea2" exitCode=2 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.369818 4720 generic.go:334] "Generic (PLEG): container finished" podID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerID="bb044bcf1e937eef84a9af6be69da2d9411b3a20eb298a375d173325fb37e0d9" exitCode=0 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.369825 4720 generic.go:334] "Generic (PLEG): container finished" podID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerID="f68f1a5e1e314c84943a523259ebab5201398fa5582b4d1fe38e5db13ef6b7ee" exitCode=0 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.369986 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-log" containerID="cri-o://77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f" gracePeriod=30 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.370061 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerDied","Data":"a8a428d81c2a4b24880f0e97c63c021c20e172d481085396c2cd3726e26b8965"} Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.370086 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerDied","Data":"cef0c096ce28470c19df6b9fdc08acd09d2c6424ecb8cf35503bdb8f241b7ea2"} Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.370096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerDied","Data":"bb044bcf1e937eef84a9af6be69da2d9411b3a20eb298a375d173325fb37e0d9"} Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.370105 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerDied","Data":"f68f1a5e1e314c84943a523259ebab5201398fa5582b4d1fe38e5db13ef6b7ee"} Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.370186 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-api" containerID="cri-o://fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514" gracePeriod=30 Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.387966 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" podStartSLOduration=3.386551921 podStartE2EDuration="3.386551921s" podCreationTimestamp="2026-02-02 09:18:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:49.383762263 +0000 UTC m=+1363.239387819" watchObservedRunningTime="2026-02-02 09:18:49.386551921 +0000 UTC m=+1363.242177487" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.561241 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688079 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688125 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688157 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688179 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf5k7\" (UniqueName: \"kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688267 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688294 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688335 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.688391 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd\") pod \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\" (UID: \"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92\") " Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.689098 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: 
"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.689213 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.695015 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7" (OuterVolumeSpecName: "kube-api-access-gf5k7") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "kube-api-access-gf5k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.698854 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts" (OuterVolumeSpecName: "scripts") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.720125 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.771910 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.775025 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790061 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790097 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf5k7\" (UniqueName: \"kubernetes.io/projected/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-kube-api-access-gf5k7\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790108 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790117 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790125 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790134 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.790142 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.811760 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data" (OuterVolumeSpecName: "config-data") pod "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" (UID: "be2f3f65-68c9-4ea4-8b1b-4de61ad0de92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:49 crc kubenswrapper[4720]: I0202 09:18:49.891813 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.383936 4720 generic.go:334] "Generic (PLEG): container finished" podID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerID="77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f" exitCode=143 Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.384019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerDied","Data":"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f"} Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.387628 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be2f3f65-68c9-4ea4-8b1b-4de61ad0de92","Type":"ContainerDied","Data":"fda5e4455ba5825b1679bd0a4726e65edb3644712af7a939e88d85b93d132716"} Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.387690 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.387705 4720 scope.go:117] "RemoveContainer" containerID="a8a428d81c2a4b24880f0e97c63c021c20e172d481085396c2cd3726e26b8965" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.415092 4720 scope.go:117] "RemoveContainer" containerID="cef0c096ce28470c19df6b9fdc08acd09d2c6424ecb8cf35503bdb8f241b7ea2" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.453622 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.478811 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.479396 4720 scope.go:117] "RemoveContainer" containerID="bb044bcf1e937eef84a9af6be69da2d9411b3a20eb298a375d173325fb37e0d9" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.511822 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:50 crc kubenswrapper[4720]: E0202 09:18:50.512742 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="proxy-httpd" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.512764 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="proxy-httpd" Feb 02 09:18:50 crc kubenswrapper[4720]: E0202 09:18:50.513028 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-notification-agent" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513048 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-notification-agent" Feb 02 09:18:50 crc kubenswrapper[4720]: E0202 09:18:50.513076 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-central-agent" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513085 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-central-agent" Feb 02 
09:18:50 crc kubenswrapper[4720]: E0202 09:18:50.513100 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="sg-core" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513108 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="sg-core" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513349 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-central-agent" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513490 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="proxy-httpd" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513509 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="sg-core" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.513524 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" containerName="ceilometer-notification-agent" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.520637 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.523868 4720 scope.go:117] "RemoveContainer" containerID="f68f1a5e1e314c84943a523259ebab5201398fa5582b4d1fe38e5db13ef6b7ee" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.525989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.526194 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.526247 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.547194 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.606417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.606658 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vch\" (UniqueName: \"kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.606846 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.607015 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.607132 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.607199 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.607262 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.607341 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.619290 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:50 crc kubenswrapper[4720]: E0202 09:18:50.620506 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-r2vch log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="77fa1967-e56b-46c3-be4c-c62e314854bf" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709425 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709521 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vch\" (UniqueName: \"kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709574 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709616 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709644 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709671 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.709710 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.710192 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.710247 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.715656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.716089 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.717804 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.722452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.724423 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.740978 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vch\" (UniqueName: \"kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch\") pod \"ceilometer-0\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " pod="openstack/ceilometer-0" Feb 02 09:18:50 crc kubenswrapper[4720]: I0202 09:18:50.899250 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2f3f65-68c9-4ea4-8b1b-4de61ad0de92" path="/var/lib/kubelet/pods/be2f3f65-68c9-4ea4-8b1b-4de61ad0de92/volumes" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.405033 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.420209 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.495032 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.501094 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597112 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597437 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597714 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597785 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.597905 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2vch\" (UniqueName: \"kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598339 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts\") pod \"77fa1967-e56b-46c3-be4c-c62e314854bf\" (UID: \"77fa1967-e56b-46c3-be4c-c62e314854bf\") " Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598335 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598779 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598792 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.598802 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77fa1967-e56b-46c3-be4c-c62e314854bf-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.600775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch" (OuterVolumeSpecName: "kube-api-access-r2vch") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "kube-api-access-r2vch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.602238 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data" (OuterVolumeSpecName: "config-data") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.602286 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.603719 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.604576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts" (OuterVolumeSpecName: "scripts") pod "77fa1967-e56b-46c3-be4c-c62e314854bf" (UID: "77fa1967-e56b-46c3-be4c-c62e314854bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.700645 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.700683 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2vch\" (UniqueName: \"kubernetes.io/projected/77fa1967-e56b-46c3-be4c-c62e314854bf-kube-api-access-r2vch\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.700698 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.700710 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:51 crc kubenswrapper[4720]: I0202 09:18:51.700722 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fa1967-e56b-46c3-be4c-c62e314854bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.414095 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.476775 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.490670 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.520048 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.527969 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.532962 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.533219 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.533506 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.546897 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.618930 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m846d\" (UniqueName: \"kubernetes.io/projected/10257622-18ee-4e30-9625-328376f9c3f1-kube-api-access-m846d\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619330 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-run-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619352 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-log-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619423 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-scripts\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619471 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619502 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619548 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-config-data\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.619572 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.720934 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-scripts\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.720982 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721034 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-config-data\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m846d\" (UniqueName: \"kubernetes.io/projected/10257622-18ee-4e30-9625-328376f9c3f1-kube-api-access-m846d\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-run-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721172 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-log-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.721489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-log-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.723152 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/10257622-18ee-4e30-9625-328376f9c3f1-run-httpd\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.727606 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-scripts\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.731240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.729048 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.738293 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-config-data\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.740021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10257622-18ee-4e30-9625-328376f9c3f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.741300 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m846d\" (UniqueName: \"kubernetes.io/projected/10257622-18ee-4e30-9625-328376f9c3f1-kube-api-access-m846d\") pod \"ceilometer-0\" (UID: \"10257622-18ee-4e30-9625-328376f9c3f1\") " pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.896918 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fa1967-e56b-46c3-be4c-c62e314854bf" path="/var/lib/kubelet/pods/77fa1967-e56b-46c3-be4c-c62e314854bf/volumes" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.961496 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 09:18:52 crc kubenswrapper[4720]: I0202 09:18:52.962034 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.134791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnbd\" (UniqueName: \"kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd\") pod \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.135431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle\") pod \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.135819 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data\") pod \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.136118 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs\") pod \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\" (UID: \"febdb749-6ac6-4c13-b6cf-7155c7aefe9d\") " Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.136530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs" (OuterVolumeSpecName: "logs") pod "febdb749-6ac6-4c13-b6cf-7155c7aefe9d" (UID: "febdb749-6ac6-4c13-b6cf-7155c7aefe9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.136987 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.143213 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd" (OuterVolumeSpecName: "kube-api-access-nwnbd") pod "febdb749-6ac6-4c13-b6cf-7155c7aefe9d" (UID: "febdb749-6ac6-4c13-b6cf-7155c7aefe9d"). InnerVolumeSpecName "kube-api-access-nwnbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.174173 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "febdb749-6ac6-4c13-b6cf-7155c7aefe9d" (UID: "febdb749-6ac6-4c13-b6cf-7155c7aefe9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.197341 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data" (OuterVolumeSpecName: "config-data") pod "febdb749-6ac6-4c13-b6cf-7155c7aefe9d" (UID: "febdb749-6ac6-4c13-b6cf-7155c7aefe9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.242551 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnbd\" (UniqueName: \"kubernetes.io/projected/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-kube-api-access-nwnbd\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.242612 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.242624 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/febdb749-6ac6-4c13-b6cf-7155c7aefe9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.424064 4720 generic.go:334] "Generic (PLEG): container finished" podID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerID="fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514" exitCode=0 Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.424116 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerDied","Data":"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514"} Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.424132 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.424147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"febdb749-6ac6-4c13-b6cf-7155c7aefe9d","Type":"ContainerDied","Data":"3bdccb2d4a9064bf016fc4840cc6a332307e38863e99d45d0ce79a9a80fb9a15"} Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.424167 4720 scope.go:117] "RemoveContainer" containerID="fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.454742 4720 scope.go:117] "RemoveContainer" containerID="77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.464607 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.481449 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.486081 4720 scope.go:117] "RemoveContainer" containerID="fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514" Feb 02 09:18:53 crc kubenswrapper[4720]: E0202 09:18:53.486497 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514\": container with ID starting with fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514 not found: ID does not exist" containerID="fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.486527 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514"} err="failed to get container status \"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514\": rpc error: code = NotFound desc = could 
not find container \"fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514\": container with ID starting with fcef5b3ef4f21b353fcee61fc4d155099e4885e7123425a0fd93d95d0faa3514 not found: ID does not exist" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.486548 4720 scope.go:117] "RemoveContainer" containerID="77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f" Feb 02 09:18:53 crc kubenswrapper[4720]: E0202 09:18:53.486866 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f\": container with ID starting with 77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f not found: ID does not exist" containerID="77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.486910 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f"} err="failed to get container status \"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f\": rpc error: code = NotFound desc = could not find container \"77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f\": container with ID starting with 77ac5f5e4938dda28552b75d0f84784022cea8c4147c36a219e4e26e045d768f not found: ID does not exist" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.493089 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:53 crc kubenswrapper[4720]: E0202 09:18:53.493446 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-log" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.493462 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-log" Feb 02 09:18:53 crc kubenswrapper[4720]: E0202 09:18:53.493498 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-api" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.493504 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-api" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.493687 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-api" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.493705 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" containerName="nova-api-log" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.494663 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.499703 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.499742 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.499918 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.505203 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.513447 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgfq\" (UniqueName: \"kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550426 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550512 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550551 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.550577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652738 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgfq\" (UniqueName: \"kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652816 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652910 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652949 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.652973 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.653464 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.658327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.659025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.662110 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.662414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.674407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgfq\" (UniqueName: 
\"kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq\") pod \"nova-api-0\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") " pod="openstack/nova-api-0" Feb 02 09:18:53 crc kubenswrapper[4720]: I0202 09:18:53.818013 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.001451 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.020997 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.314434 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:18:54 crc kubenswrapper[4720]: W0202 09:18:54.316758 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd49c2a_a14b_4cbd_a0f9_060ca667f833.slice/crio-f7862cc093da570fbc68c893b80d82e009c28163b730135ca6063dea2d517eb1 WatchSource:0}: Error finding container f7862cc093da570fbc68c893b80d82e009c28163b730135ca6063dea2d517eb1: Status 404 returned error can't find the container with id f7862cc093da570fbc68c893b80d82e009c28163b730135ca6063dea2d517eb1 Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.435413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerStarted","Data":"f7862cc093da570fbc68c893b80d82e009c28163b730135ca6063dea2d517eb1"} Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.438356 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10257622-18ee-4e30-9625-328376f9c3f1","Type":"ContainerStarted","Data":"0f8a2f6c9ce32dcb6ed3f37d7d9204f435b7672778dba6e3833a141b1eac5c40"} Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.438391 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10257622-18ee-4e30-9625-328376f9c3f1","Type":"ContainerStarted","Data":"ac82838a596d0e4d71b8df069d3a61ba640887703a7b3ad1782e941c2b1a225f"} Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.475553 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.628271 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dnfdm"] Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.629422 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.642399 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.642634 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.659684 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dnfdm"] Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.786081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.786165 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.786357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlb5\" (UniqueName: \"kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.786600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.888067 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.888405 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.888487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.888567 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtlb5\" (UniqueName: 
\"kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.895563 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.895697 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.898058 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.904160 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlb5\" (UniqueName: \"kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5\") pod \"nova-cell1-cell-mapping-dnfdm\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:54 crc kubenswrapper[4720]: I0202 09:18:54.904163 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febdb749-6ac6-4c13-b6cf-7155c7aefe9d" path="/var/lib/kubelet/pods/febdb749-6ac6-4c13-b6cf-7155c7aefe9d/volumes" Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.100950 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.466591 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10257622-18ee-4e30-9625-328376f9c3f1","Type":"ContainerStarted","Data":"b13e6c3a4a5e9e5769234d47ea651ac360ccebce0bf2c1eb01d3eae6f4b8dc01"} Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.468993 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerStarted","Data":"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"} Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.469071 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerStarted","Data":"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"} Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.490465 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.490443589 podStartE2EDuration="2.490443589s" podCreationTimestamp="2026-02-02 09:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:18:55.484253138 +0000 UTC m=+1369.339878714" watchObservedRunningTime="2026-02-02 09:18:55.490443589 +0000 UTC m=+1369.346069165" Feb 02 09:18:55 crc kubenswrapper[4720]: I0202 09:18:55.574187 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dnfdm"] Feb 02 09:18:55 crc kubenswrapper[4720]: W0202 09:18:55.578296 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd39469b5_2d0c_4ae1_9aab_5ca2027938d9.slice/crio-d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51 WatchSource:0}: Error finding container d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51: Status 404 returned error can't find the container with id d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51 Feb 02 09:18:56 crc kubenswrapper[4720]: I0202 09:18:56.479376 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dnfdm" event={"ID":"d39469b5-2d0c-4ae1-9aab-5ca2027938d9","Type":"ContainerStarted","Data":"43ab3004951c43990c2e2ee9a9b05d3a6593fcde9731567f5f1f8de6fc8c4113"} Feb 02 09:18:56 crc kubenswrapper[4720]: I0202 09:18:56.479661 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dnfdm" event={"ID":"d39469b5-2d0c-4ae1-9aab-5ca2027938d9","Type":"ContainerStarted","Data":"d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51"} Feb 02 09:18:56 crc kubenswrapper[4720]: I0202 09:18:56.481210 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10257622-18ee-4e30-9625-328376f9c3f1","Type":"ContainerStarted","Data":"9a2ed46540bc009c8e4d574309630916125160a98f573f01577ab471f6ba36ea"} Feb 02 09:18:56 crc kubenswrapper[4720]: I0202 09:18:56.510821 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dnfdm" podStartSLOduration=2.510796144 podStartE2EDuration="2.510796144s" podCreationTimestamp="2026-02-02 09:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 09:18:56.498054803 +0000 UTC m=+1370.353680369" watchObservedRunningTime="2026-02-02 09:18:56.510796144 +0000 UTC m=+1370.366421710" Feb 02 09:18:56 crc kubenswrapper[4720]: I0202 09:18:56.907234 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.003758 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.004188 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="dnsmasq-dns" containerID="cri-o://cdd96a32cabb3eaf367c4ef94ca3c021e8fe916a94cde0f6e98a1fb3ef1f805f" gracePeriod=10 Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.531951 4720 generic.go:334] "Generic (PLEG): container finished" podID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerID="cdd96a32cabb3eaf367c4ef94ca3c021e8fe916a94cde0f6e98a1fb3ef1f805f" exitCode=0 Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.533100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" event={"ID":"f8861dbd-3f3f-4935-9a55-1cb24c812053","Type":"ContainerDied","Data":"cdd96a32cabb3eaf367c4ef94ca3c021e8fe916a94cde0f6e98a1fb3ef1f805f"} Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.697410 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.859520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.859568 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.859616 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.859634 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.860219 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr5vz\" (UniqueName: \"kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.860362 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0\") pod \"f8861dbd-3f3f-4935-9a55-1cb24c812053\" (UID: \"f8861dbd-3f3f-4935-9a55-1cb24c812053\") " Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.863932 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz" (OuterVolumeSpecName: "kube-api-access-zr5vz") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "kube-api-access-zr5vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.918657 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.924328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.927576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.929993 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config" (OuterVolumeSpecName: "config") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.931155 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f8861dbd-3f3f-4935-9a55-1cb24c812053" (UID: "f8861dbd-3f3f-4935-9a55-1cb24c812053"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963166 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr5vz\" (UniqueName: \"kubernetes.io/projected/f8861dbd-3f3f-4935-9a55-1cb24c812053-kube-api-access-zr5vz\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963204 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963218 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963227 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963237 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:57 crc kubenswrapper[4720]: I0202 09:18:57.963245 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8861dbd-3f3f-4935-9a55-1cb24c812053-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.546647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" event={"ID":"f8861dbd-3f3f-4935-9a55-1cb24c812053","Type":"ContainerDied","Data":"be5cbdeab5c6b20e68eb9a1e8298cab9967096328eae2a673ddb6094c386d666"} Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.546660 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5fbbb8c5-czfj4" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.547041 4720 scope.go:117] "RemoveContainer" containerID="cdd96a32cabb3eaf367c4ef94ca3c021e8fe916a94cde0f6e98a1fb3ef1f805f" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.549760 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10257622-18ee-4e30-9625-328376f9c3f1","Type":"ContainerStarted","Data":"4323cd7755fa34b30cb103af202e2307904e8be6ce5b00fc3e38cf25cb3b1c1d"} Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.550156 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.578172 4720 scope.go:117] "RemoveContainer" containerID="479380aab0721bcb0e8b25bbf9080e4a6510f9ba6dd193bf5e2b8be5e4814b5b" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.637809 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.80102695 podStartE2EDuration="6.637785191s" podCreationTimestamp="2026-02-02 09:18:52 +0000 UTC" firstStartedPulling="2026-02-02 09:18:53.491162929 +0000 UTC m=+1367.346788475" lastFinishedPulling="2026-02-02 09:18:57.32792116 +0000 UTC m=+1371.183546716" observedRunningTime="2026-02-02 09:18:58.59473676 +0000 UTC m=+1372.450362326" watchObservedRunningTime="2026-02-02 09:18:58.637785191 +0000 UTC m=+1372.493410757" Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.668752 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.676979 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5fbbb8c5-czfj4"] Feb 02 09:18:58 crc kubenswrapper[4720]: I0202 09:18:58.900269 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" path="/var/lib/kubelet/pods/f8861dbd-3f3f-4935-9a55-1cb24c812053/volumes" Feb 02 09:19:01 crc kubenswrapper[4720]: I0202 09:19:01.597182 4720 generic.go:334] "Generic (PLEG): container finished" podID="d39469b5-2d0c-4ae1-9aab-5ca2027938d9" containerID="43ab3004951c43990c2e2ee9a9b05d3a6593fcde9731567f5f1f8de6fc8c4113" exitCode=0 Feb 02 09:19:01 crc kubenswrapper[4720]: I0202 09:19:01.597278 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dnfdm" event={"ID":"d39469b5-2d0c-4ae1-9aab-5ca2027938d9","Type":"ContainerDied","Data":"43ab3004951c43990c2e2ee9a9b05d3a6593fcde9731567f5f1f8de6fc8c4113"} Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.088981 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.173792 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data\") pod \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.174099 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle\") pod \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.174242 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtlb5\" (UniqueName: \"kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5\") pod \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.174300 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts\") pod \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\" (UID: \"d39469b5-2d0c-4ae1-9aab-5ca2027938d9\") " Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.182716 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts" (OuterVolumeSpecName: "scripts") pod "d39469b5-2d0c-4ae1-9aab-5ca2027938d9" (UID: "d39469b5-2d0c-4ae1-9aab-5ca2027938d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.183263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5" (OuterVolumeSpecName: "kube-api-access-vtlb5") pod "d39469b5-2d0c-4ae1-9aab-5ca2027938d9" (UID: "d39469b5-2d0c-4ae1-9aab-5ca2027938d9"). InnerVolumeSpecName "kube-api-access-vtlb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.211277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39469b5-2d0c-4ae1-9aab-5ca2027938d9" (UID: "d39469b5-2d0c-4ae1-9aab-5ca2027938d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.219746 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data" (OuterVolumeSpecName: "config-data") pod "d39469b5-2d0c-4ae1-9aab-5ca2027938d9" (UID: "d39469b5-2d0c-4ae1-9aab-5ca2027938d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.284163 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.284213 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.284234 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtlb5\" (UniqueName: \"kubernetes.io/projected/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-kube-api-access-vtlb5\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.284245 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39469b5-2d0c-4ae1-9aab-5ca2027938d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.615672 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dnfdm" event={"ID":"d39469b5-2d0c-4ae1-9aab-5ca2027938d9","Type":"ContainerDied","Data":"d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51"} Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.615983 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87e6109d43b1facf4ddfb1c2a186ad5a5787503f7b9b86b69734802412f1b51" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.615754 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dnfdm" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.819500 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.821428 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.826952 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.827294 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerName="nova-scheduler-scheduler" containerID="cri-o://8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7" gracePeriod=30 Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.849697 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.901907 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.902219 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log" containerID="cri-o://b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c" gracePeriod=30 Feb 02 09:19:03 crc kubenswrapper[4720]: I0202 09:19:03.902262 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" 
containerName="nova-metadata-metadata" containerID="cri-o://6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd" gracePeriod=30 Feb 02 09:19:04 crc kubenswrapper[4720]: I0202 09:19:04.628264 4720 generic.go:334] "Generic (PLEG): container finished" podID="92598ee6-7272-4e72-9616-60308a02970a" containerID="b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c" exitCode=143 Feb 02 09:19:04 crc kubenswrapper[4720]: I0202 09:19:04.629361 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerDied","Data":"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"} Feb 02 09:19:04 crc kubenswrapper[4720]: I0202 09:19:04.834042 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 09:19:04 crc kubenswrapper[4720]: I0202 09:19:04.834034 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 09:19:05 crc kubenswrapper[4720]: E0202 09:19:05.451835 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:19:05 crc kubenswrapper[4720]: E0202 09:19:05.456246 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:19:05 crc kubenswrapper[4720]: E0202 09:19:05.457925 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 09:19:05 crc kubenswrapper[4720]: E0202 09:19:05.457992 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerName="nova-scheduler-scheduler" Feb 02 09:19:05 crc kubenswrapper[4720]: I0202 09:19:05.635765 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-log" containerID="cri-o://e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e" gracePeriod=30 Feb 02 09:19:05 crc kubenswrapper[4720]: I0202 09:19:05.635901 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" 
containerName="nova-api-api" containerID="cri-o://20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384" gracePeriod=30 Feb 02 09:19:06 crc kubenswrapper[4720]: I0202 09:19:06.646182 4720 generic.go:334] "Generic (PLEG): container finished" podID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerID="e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e" exitCode=143 Feb 02 09:19:06 crc kubenswrapper[4720]: I0202 09:19:06.646301 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerDied","Data":"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"} Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.544019 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.584068 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs\") pod \"92598ee6-7272-4e72-9616-60308a02970a\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.584191 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdxk2\" (UniqueName: \"kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2\") pod \"92598ee6-7272-4e72-9616-60308a02970a\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.584238 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle\") pod \"92598ee6-7272-4e72-9616-60308a02970a\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.584282 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data\") pod \"92598ee6-7272-4e72-9616-60308a02970a\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.584418 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs\") pod \"92598ee6-7272-4e72-9616-60308a02970a\" (UID: \"92598ee6-7272-4e72-9616-60308a02970a\") " Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.585998 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs" (OuterVolumeSpecName: "logs") pod "92598ee6-7272-4e72-9616-60308a02970a" (UID: "92598ee6-7272-4e72-9616-60308a02970a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.596264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2" (OuterVolumeSpecName: "kube-api-access-tdxk2") pod "92598ee6-7272-4e72-9616-60308a02970a" (UID: "92598ee6-7272-4e72-9616-60308a02970a"). InnerVolumeSpecName "kube-api-access-tdxk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.630203 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92598ee6-7272-4e72-9616-60308a02970a" (UID: "92598ee6-7272-4e72-9616-60308a02970a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.662025 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "92598ee6-7272-4e72-9616-60308a02970a" (UID: "92598ee6-7272-4e72-9616-60308a02970a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.663624 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data" (OuterVolumeSpecName: "config-data") pod "92598ee6-7272-4e72-9616-60308a02970a" (UID: "92598ee6-7272-4e72-9616-60308a02970a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.663909 4720 generic.go:334] "Generic (PLEG): container finished" podID="92598ee6-7272-4e72-9616-60308a02970a" containerID="6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd" exitCode=0 Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.664011 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerDied","Data":"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd"} Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.664110 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92598ee6-7272-4e72-9616-60308a02970a","Type":"ContainerDied","Data":"97a61fc0bebd995ad9d8929ea322f4763e0d394d61fcc7226f8a9b6e1679a1f0"} Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.664195 4720 scope.go:117] "RemoveContainer" containerID="6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd" Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.664401 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.687103 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.687142 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92598ee6-7272-4e72-9616-60308a02970a-logs\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.687157 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.687171 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdxk2\" (UniqueName: \"kubernetes.io/projected/92598ee6-7272-4e72-9616-60308a02970a-kube-api-access-tdxk2\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.687186 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92598ee6-7272-4e72-9616-60308a02970a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.703105 4720 scope.go:117] "RemoveContainer" containerID="b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.705595 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.722155 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.731462 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.731921 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="init"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.731937 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="init"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.731957 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39469b5-2d0c-4ae1-9aab-5ca2027938d9" containerName="nova-manage"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.731964 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39469b5-2d0c-4ae1-9aab-5ca2027938d9" containerName="nova-manage"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.731980 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="dnsmasq-dns"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.731986 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="dnsmasq-dns"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.731997 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732003 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.732035 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-metadata"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732043 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-metadata"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732260 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39469b5-2d0c-4ae1-9aab-5ca2027938d9" containerName="nova-manage"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732274 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-metadata"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732290 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8861dbd-3f3f-4935-9a55-1cb24c812053" containerName="dnsmasq-dns"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.732307 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.733253 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.737903 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.738099 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.742483 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.754674 4720 scope.go:117] "RemoveContainer" containerID="6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.759092 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd\": container with ID starting with 6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd not found: ID does not exist" containerID="6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.759136 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd"} err="failed to get container status \"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd\": rpc error: code = NotFound desc = could not find container \"6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd\": container with ID starting with 6f830e6a08308e3418016043b3d5e3012d90227e211390fa96fb3a2f5637abbd not found: ID does not exist"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.759172 4720 scope.go:117] "RemoveContainer" containerID="b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"
Feb 02 09:19:07 crc kubenswrapper[4720]: E0202 09:19:07.759578 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c\": container with ID starting with b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c not found: ID does not exist" containerID="b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.759607 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c"} err="failed to get container status \"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c\": rpc error: code = NotFound desc = could not find container \"b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c\": container with ID starting with b041062255065b7cb84ff4307303f7a322af0fa0c2dab7184b51c8d43f64fd3c not found: ID does not exist"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.790477 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-config-data\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.790511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.790528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.790821 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-logs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.791281 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsb9q\" (UniqueName: \"kubernetes.io/projected/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-kube-api-access-xsb9q\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.893040 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsb9q\" (UniqueName: \"kubernetes.io/projected/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-kube-api-access-xsb9q\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.893355 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-config-data\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.893428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.893496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.893654 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-logs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.894073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-logs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.897749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.900109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-config-data\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.901699 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:07 crc kubenswrapper[4720]: I0202 09:19:07.910571 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsb9q\" (UniqueName: \"kubernetes.io/projected/7f33c374-23ce-4cf0-a453-b63ae0d2cb1a-kube-api-access-xsb9q\") pod \"nova-metadata-0\" (UID: \"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a\") " pod="openstack/nova-metadata-0"
Feb 02 09:19:08 crc kubenswrapper[4720]: I0202 09:19:08.064940 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 09:19:08 crc kubenswrapper[4720]: I0202 09:19:08.564949 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 09:19:08 crc kubenswrapper[4720]: I0202 09:19:08.675372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a","Type":"ContainerStarted","Data":"1a716af39913156b138b90f9b2e19b95efc7d522c3137c6eb64022dfa0158423"}
Feb 02 09:19:08 crc kubenswrapper[4720]: I0202 09:19:08.906529 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92598ee6-7272-4e72-9616-60308a02970a" path="/var/lib/kubelet/pods/92598ee6-7272-4e72-9616-60308a02970a/volumes"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.636286 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.691006 4720 generic.go:334] "Generic (PLEG): container finished" podID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7" exitCode=0
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.691074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c","Type":"ContainerDied","Data":"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"}
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.691107 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c","Type":"ContainerDied","Data":"136b8ae6c3c90d7c9eb98751fa1e4320cd4431db97a2bb698213510a9da2534c"}
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.691123 4720 scope.go:117] "RemoveContainer" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.691237 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.699495 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a","Type":"ContainerStarted","Data":"373adf76212f172fd2408000fa2443808c5ff3797fa9072941d13efae1efe5c8"}
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.699539 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f33c374-23ce-4cf0-a453-b63ae0d2cb1a","Type":"ContainerStarted","Data":"591e8388280058b9d067da4cbdb5cef6e14d6900effb612a108cdcd07e8e6c2b"}
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.720844 4720 scope.go:117] "RemoveContainer" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"
Feb 02 09:19:09 crc kubenswrapper[4720]: E0202 09:19:09.722570 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7\": container with ID starting with 8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7 not found: ID does not exist" containerID="8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.722643 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7"} err="failed to get container status \"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7\": rpc error: code = NotFound desc = could not find container \"8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7\": container with ID starting with 8d114148cfa7b81951cd721d074e8bacd67da81b274b21064f2ef62b8cf1eae7 not found: ID does not exist"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.734419 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.734385395 podStartE2EDuration="2.734385395s" podCreationTimestamp="2026-02-02 09:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:19:09.731931445 +0000 UTC m=+1383.587557011" watchObservedRunningTime="2026-02-02 09:19:09.734385395 +0000 UTC m=+1383.590011001"
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.739719 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data\") pod \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") "
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.739945 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle\") pod \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") "
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.740128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4t29\" (UniqueName: \"kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29\") pod \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\" (UID: \"e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c\") "
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.747165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29" (OuterVolumeSpecName: "kube-api-access-n4t29") pod "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" (UID: "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c"). InnerVolumeSpecName "kube-api-access-n4t29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.780926 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data" (OuterVolumeSpecName: "config-data") pod "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" (UID: "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.783663 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" (UID: "e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.842195 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4t29\" (UniqueName: \"kubernetes.io/projected/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-kube-api-access-n4t29\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.842230 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:09 crc kubenswrapper[4720]: I0202 09:19:09.842244 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.032364 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.039968 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.060803 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 09:19:10 crc kubenswrapper[4720]: E0202 09:19:10.061251 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerName="nova-scheduler-scheduler"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.061268 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerName="nova-scheduler-scheduler"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.061467 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" containerName="nova-scheduler-scheduler"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.062078 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.064234 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.082659 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.148996 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.149240 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5jp\" (UniqueName: \"kubernetes.io/projected/1215d777-66de-494f-9017-6d859aa3d120-kube-api-access-4b5jp\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.149263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-config-data\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.251751 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.251837 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b5jp\" (UniqueName: \"kubernetes.io/projected/1215d777-66de-494f-9017-6d859aa3d120-kube-api-access-4b5jp\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.251906 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-config-data\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.257660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.257706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215d777-66de-494f-9017-6d859aa3d120-config-data\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.271828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b5jp\" (UniqueName: \"kubernetes.io/projected/1215d777-66de-494f-9017-6d859aa3d120-kube-api-access-4b5jp\") pod \"nova-scheduler-0\" (UID: \"1215d777-66de-494f-9017-6d859aa3d120\") " pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.535092 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.572199 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.657780 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.657868 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.658002 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.658040 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.658104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.658144 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgfq\" (UniqueName: \"kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq\") pod \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\" (UID: \"5bd49c2a-a14b-4cbd-a0f9-060ca667f833\") "
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.658542 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs" (OuterVolumeSpecName: "logs") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.665609 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq" (OuterVolumeSpecName: "kube-api-access-cvgfq") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "kube-api-access-cvgfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.683845 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.691425 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data" (OuterVolumeSpecName: "config-data") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.714512 4720 generic.go:334] "Generic (PLEG): container finished" podID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerID="20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384" exitCode=0
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.714602 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerDied","Data":"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"}
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.714637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5bd49c2a-a14b-4cbd-a0f9-060ca667f833","Type":"ContainerDied","Data":"f7862cc093da570fbc68c893b80d82e009c28163b730135ca6063dea2d517eb1"}
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.714664 4720 scope.go:117] "RemoveContainer" containerID="20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.715324 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.721332 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.723663 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5bd49c2a-a14b-4cbd-a0f9-060ca667f833" (UID: "5bd49c2a-a14b-4cbd-a0f9-060ca667f833"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.740188 4720 scope.go:117] "RemoveContainer" containerID="e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.760778 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.760960 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.761017 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvgfq\" (UniqueName: \"kubernetes.io/projected/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-kube-api-access-cvgfq\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.761067 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.761116 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-logs\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.761182 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bd49c2a-a14b-4cbd-a0f9-060ca667f833-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.762836 4720 scope.go:117] "RemoveContainer" containerID="20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"
Feb 02 09:19:10 crc kubenswrapper[4720]: E0202 09:19:10.763431 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384\": container with ID starting with 20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384 not found: ID does not exist" containerID="20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.763494 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384"} err="failed to get container status \"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384\": rpc error: code = NotFound desc = could not find container \"20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384\": container with ID starting with 20c50cf540084f580e1645d136f6fe8f37640e5ad4f68fbd3a00b3a8b8fd2384 not found: ID does not exist"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.763617 4720 scope.go:117] "RemoveContainer" containerID="e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"
Feb 02 09:19:10 crc kubenswrapper[4720]: E0202 09:19:10.764322 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e\": container with ID starting with e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e not found: ID does not exist" containerID="e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.764378 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e"} err="failed to get container status \"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e\": rpc error: code = NotFound desc = could not find container \"e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e\": container with ID starting with e302541718027429ee4a188a4bf042956ef1567a7d907215c16e0f8550b66b7e not found: ID does not exist"
Feb 02 09:19:10 crc kubenswrapper[4720]: I0202 09:19:10.906341 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c" path="/var/lib/kubelet/pods/e37a1c88-77d0-44bc-a53c-bf98e4bd8f7c/volumes"
Feb 02 09:19:11 crc kubenswrapper[4720]: W0202 09:19:11.008715 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1215d777_66de_494f_9017_6d859aa3d120.slice/crio-72187fb656cd672684b5499b22979d503dd481c529fcc1b999d2329d0b1147f0 WatchSource:0}: Error finding container 72187fb656cd672684b5499b22979d503dd481c529fcc1b999d2329d0b1147f0: Status 404 returned error can't find the container with id 72187fb656cd672684b5499b22979d503dd481c529fcc1b999d2329d0b1147f0
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.008966 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.057004 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.078815 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.090017 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:19:11 crc kubenswrapper[4720]: E0202 09:19:11.097464 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-log"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.097509 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-log"
Feb 02 09:19:11 crc kubenswrapper[4720]: E0202 09:19:11.097553 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-api"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.097562 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-api"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.097927 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-log"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.097956 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" containerName="nova-api-api"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.099257 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.103079 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.103134 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.103861 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.107187 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.167999 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6xt\" (UniqueName: \"kubernetes.io/projected/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-kube-api-access-cn6xt\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.168080 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.168118 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-config-data\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.168168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-logs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.168237 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.168252 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271009 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271052 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn6xt\" (UniqueName: \"kubernetes.io/projected/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-kube-api-access-cn6xt\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271781 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-config-data\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.271852 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-logs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.272425 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-logs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.275353 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.278726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-config-data\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.279320 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.280194 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.289502 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn6xt\" (UniqueName: \"kubernetes.io/projected/ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b-kube-api-access-cn6xt\") pod \"nova-api-0\" (UID: \"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b\") " pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.528236 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.730831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1215d777-66de-494f-9017-6d859aa3d120","Type":"ContainerStarted","Data":"e9ec0369e737593a08bff8416a67b599ab51582cf43c4b7c033e23e876a6bb95"}
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.731097 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1215d777-66de-494f-9017-6d859aa3d120","Type":"ContainerStarted","Data":"72187fb656cd672684b5499b22979d503dd481c529fcc1b999d2329d0b1147f0"}
Feb 02 09:19:11 crc kubenswrapper[4720]: I0202 09:19:11.776610 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.776580272 podStartE2EDuration="1.776580272s" podCreationTimestamp="2026-02-02 09:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:19:11.753600172 +0000 UTC m=+1385.609225718" watchObservedRunningTime="2026-02-02 09:19:11.776580272 +0000 UTC m=+1385.632205828"
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.025421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.375905 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.375938 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="92598ee6-7272-4e72-9616-60308a02970a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.743113 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b","Type":"ContainerStarted","Data":"071a85bbd1f47aadbb8ff2136c87c8979f53ecb71f874cdf042c19bd36471e6f"}
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.744481 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b","Type":"ContainerStarted","Data":"d530465a27469d915675ce1d3348d58b0604e366b29d53a055995ded6ec47347"}
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.744595 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b","Type":"ContainerStarted","Data":"f23d4eb7026a25be4e84913b4bf79279312e140ef93ef645342cb59bfa971945"}
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.777118 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.777097644 podStartE2EDuration="1.777097644s" podCreationTimestamp="2026-02-02 09:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:19:12.768545094 +0000 UTC m=+1386.624170650" watchObservedRunningTime="2026-02-02 09:19:12.777097644 +0000 UTC m=+1386.632723200"
Feb 02 09:19:12 crc kubenswrapper[4720]: I0202 09:19:12.901037 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd49c2a-a14b-4cbd-a0f9-060ca667f833" path="/var/lib/kubelet/pods/5bd49c2a-a14b-4cbd-a0f9-060ca667f833/volumes"
Feb 02 09:19:13 crc kubenswrapper[4720]: I0202 09:19:13.066036 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 09:19:13 crc kubenswrapper[4720]: I0202 09:19:13.066407 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 09:19:15 crc kubenswrapper[4720]: I0202 09:19:15.538829 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 02 09:19:18 crc kubenswrapper[4720]: I0202 09:19:18.065054 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 09:19:18 crc kubenswrapper[4720]: I0202 09:19:18.065835 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 09:19:19 crc kubenswrapper[4720]: I0202 09:19:19.079043 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7f33c374-23ce-4cf0-a453-b63ae0d2cb1a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:19 crc kubenswrapper[4720]: I0202 09:19:19.079066 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7f33c374-23ce-4cf0-a453-b63ae0d2cb1a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:20 crc kubenswrapper[4720]: I0202 09:19:20.535283 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 02 09:19:20 crc kubenswrapper[4720]: I0202 09:19:20.587816 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 02 09:19:20 crc kubenswrapper[4720]: I0202 09:19:20.927481 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 09:19:21 crc kubenswrapper[4720]: I0202 09:19:21.533623 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 09:19:21 crc kubenswrapper[4720]: I0202 09:19:21.534110 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 09:19:22 crc kubenswrapper[4720]: I0202 09:19:22.542041 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:22 crc kubenswrapper[4720]: I0202 09:19:22.542059 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 09:19:22 crc kubenswrapper[4720]: I0202 09:19:22.975008 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 02 09:19:28 crc kubenswrapper[4720]: I0202 09:19:28.072129 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 09:19:28 crc kubenswrapper[4720]: I0202 09:19:28.074698 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 09:19:28 crc kubenswrapper[4720]: I0202 09:19:28.080915 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 09:19:28 crc kubenswrapper[4720]: I0202 09:19:28.965124 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.543569 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.545658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.547826 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.563340 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.986769 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 09:19:31 crc kubenswrapper[4720]: I0202 09:19:31.996155 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 09:19:39 crc kubenswrapper[4720]: I0202 09:19:39.839338 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 09:19:40 crc kubenswrapper[4720]: I0202 09:19:40.697543 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 09:19:44 crc kubenswrapper[4720]: I0202 09:19:44.334197 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="rabbitmq" containerID="cri-o://5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505" gracePeriod=604796
Feb 02 09:19:45 crc kubenswrapper[4720]: I0202 09:19:45.429328 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="rabbitmq" containerID="cri-o://fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d" gracePeriod=604796
Feb 02 09:19:47 crc kubenswrapper[4720]: I0202 09:19:47.902489 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:19:47 crc kubenswrapper[4720]: I0202 09:19:47.903015 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.032870 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144692 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144748 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rm5\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144870 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144930 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144956 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.144991 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.145025 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.145088 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.145116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.145144 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf\") pod \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\" (UID: \"5cda7a8a-d405-4c4f-b8c2-bf75323634b9\") "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.147194 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.148244 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.149596 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.153152 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.154534 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5" (OuterVolumeSpecName: "kube-api-access-g4rm5") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "kube-api-access-g4rm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.156176 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.156700 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info" (OuterVolumeSpecName: "pod-info") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.172699 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.200750 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data" (OuterVolumeSpecName: "config-data") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.222336 4720 generic.go:334] "Generic (PLEG): container finished" podID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerID="5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505" exitCode=0
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.222386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerDied","Data":"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505"}
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.222505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cda7a8a-d405-4c4f-b8c2-bf75323634b9","Type":"ContainerDied","Data":"1398a8e90971e6dccf9e22f0ac93965cef33c70b553daed87bdf37223af6d8ec"}
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.222524 4720 scope.go:117] "RemoveContainer" containerID="5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.222427 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.237278 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf" (OuterVolumeSpecName: "server-conf") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249193 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249391 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249413 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249426 4720 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-pod-info\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249437 4720 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-server-conf\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249448 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249479 4720 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249488 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rm5\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-kube-api-access-g4rm5\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249498 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.249513 4720 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.274295 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.286235 4720 scope.go:117] "RemoveContainer" containerID="bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369"
Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.308422 4720 scope.go:117] "RemoveContainer" containerID="5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505"
Feb 02 09:19:51 crc kubenswrapper[4720]: E0202 09:19:51.308870 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505\": container with ID starting with 5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505 not found: ID does not exist" containerID="5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.308930 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505"} err="failed to get container status \"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505\": rpc error: code = NotFound desc = could not find container \"5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505\": container with ID starting with 5352eefc5510c2e487034d02c27d85f4023cb6045e13b3ed80b6aab4dddd6505 not found: ID does not exist" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.308959 4720 scope.go:117] "RemoveContainer" containerID="bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.308950 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5cda7a8a-d405-4c4f-b8c2-bf75323634b9" (UID: "5cda7a8a-d405-4c4f-b8c2-bf75323634b9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:51 crc kubenswrapper[4720]: E0202 09:19:51.309329 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369\": container with ID starting with bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369 not found: ID does not exist" containerID="bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.309367 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369"} err="failed to get container status \"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369\": rpc error: code = NotFound desc = could not find container \"bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369\": container with ID starting with bac48800fa98a06a9c694b63359d952052ed410a56ab01bdc09ff63629064369 not found: ID does not exist" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.351116 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cda7a8a-d405-4c4f-b8c2-bf75323634b9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.351389 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.563940 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.572187 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.584821 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:19:51 crc 
kubenswrapper[4720]: E0202 09:19:51.585226 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="setup-container" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.585243 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="setup-container" Feb 02 09:19:51 crc kubenswrapper[4720]: E0202 09:19:51.585276 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="rabbitmq" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.585283 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="rabbitmq" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.585453 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" containerName="rabbitmq" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.586446 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.590223 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.590489 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.590650 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.590805 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.590971 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b9tk4" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.593190 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.593205 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.599975 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759252 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjwq\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-kube-api-access-lpjwq\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759366 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5123a4f9-6161-445e-a17c-184cfbe9c4bb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5123a4f9-6161-445e-a17c-184cfbe9c4bb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759470 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759490 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759580 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759611 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.759628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-config-data\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.861761 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc 
kubenswrapper[4720]: I0202 09:19:51.863105 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.863040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.863653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-config-data\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.863918 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864112 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjwq\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-kube-api-access-lpjwq\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864357 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5123a4f9-6161-445e-a17c-184cfbe9c4bb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864414 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5123a4f9-6161-445e-a17c-184cfbe9c4bb-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864464 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864227 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-config-data\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864854 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.864931 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.865090 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.866396 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5123a4f9-6161-445e-a17c-184cfbe9c4bb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.869379 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5123a4f9-6161-445e-a17c-184cfbe9c4bb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.869427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.869653 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5123a4f9-6161-445e-a17c-184cfbe9c4bb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.889834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.892186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjwq\" (UniqueName: \"kubernetes.io/projected/5123a4f9-6161-445e-a17c-184cfbe9c4bb-kube-api-access-lpjwq\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:51 crc kubenswrapper[4720]: I0202 09:19:51.925565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5123a4f9-6161-445e-a17c-184cfbe9c4bb\") " pod="openstack/rabbitmq-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.022079 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.143846 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.236850 4720 generic.go:334] "Generic (PLEG): container finished" podID="527ad190-1f46-4b04-8379-72f150ba294d" containerID="fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d" exitCode=0 Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.237229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerDied","Data":"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d"} Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.237256 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"527ad190-1f46-4b04-8379-72f150ba294d","Type":"ContainerDied","Data":"21475c33bd904e25dfe8eb9ffde56cefb81552f81c640fe7f50566e4a016d55b"} Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.237276 4720 scope.go:117] "RemoveContainer" containerID="fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.237408 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.263789 4720 scope.go:117] "RemoveContainer" containerID="3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275078 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275135 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275176 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275216 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275241 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275286 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275331 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275455 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.275498 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.276010 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.276067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.276451 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.276520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6dwl\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl\") pod \"527ad190-1f46-4b04-8379-72f150ba294d\" (UID: \"527ad190-1f46-4b04-8379-72f150ba294d\") " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.278058 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.278083 4720 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.278095 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.285174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.288103 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.289093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info" (OuterVolumeSpecName: "pod-info") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.289218 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl" (OuterVolumeSpecName: "kube-api-access-m6dwl") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "kube-api-access-m6dwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.290985 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.307550 4720 scope.go:117] "RemoveContainer" containerID="fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d" Feb 02 09:19:52 crc kubenswrapper[4720]: E0202 09:19:52.308103 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d\": container with ID starting with fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d not found: ID does not exist" containerID="fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.308134 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d"} err="failed to get container status \"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d\": rpc error: code = NotFound desc = could not find container \"fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d\": container with ID starting with fcbcf98650632e25837a7100c9e3a606da2ba29ef08886c3180669f2e938260d not found: ID does not exist" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.308200 4720 scope.go:117] "RemoveContainer" containerID="3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.309969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data" (OuterVolumeSpecName: "config-data") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: 
"527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: E0202 09:19:52.315869 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1\": container with ID starting with 3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1 not found: ID does not exist" containerID="3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.315915 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1"} err="failed to get container status \"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1\": rpc error: code = NotFound desc = could not find container \"3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1\": container with ID starting with 3c801e3b4c1fc7faaec74e1dfd0977236bc727053da374472fbaf6ca650582a1 not found: ID does not exist" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.344110 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf" (OuterVolumeSpecName: "server-conf") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.379460 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.379695 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.379761 4720 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/527ad190-1f46-4b04-8379-72f150ba294d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.379817 4720 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/527ad190-1f46-4b04-8379-72f150ba294d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.379872 4720 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/527ad190-1f46-4b04-8379-72f150ba294d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.380052 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.380143 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6dwl\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-kube-api-access-m6dwl\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.396793 4720 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "527ad190-1f46-4b04-8379-72f150ba294d" (UID: "527ad190-1f46-4b04-8379-72f150ba294d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.405844 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.481720 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.481835 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/527ad190-1f46-4b04-8379-72f150ba294d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.496616 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.707412 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.719707 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.733569 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:19:52 crc kubenswrapper[4720]: E0202 09:19:52.734288 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="rabbitmq" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.734367 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="rabbitmq" Feb 02 09:19:52 crc kubenswrapper[4720]: E0202 09:19:52.734478 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="setup-container" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.734543 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="setup-container" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.734843 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="527ad190-1f46-4b04-8379-72f150ba294d" containerName="rabbitmq" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.736021 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.738596 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739491 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739556 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739575 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739600 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739651 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4p7fg" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.739681 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.753126 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.889374 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25c7\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-kube-api-access-g25c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.889749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.889870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890014 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52efc47f-bb34-4935-9b64-94e52a883272-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890244 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890506 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.890969 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52efc47f-bb34-4935-9b64-94e52a883272-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.899256 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527ad190-1f46-4b04-8379-72f150ba294d" path="/var/lib/kubelet/pods/527ad190-1f46-4b04-8379-72f150ba294d/volumes" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.900326 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cda7a8a-d405-4c4f-b8c2-bf75323634b9" path="/var/lib/kubelet/pods/5cda7a8a-d405-4c4f-b8c2-bf75323634b9/volumes" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.992899 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993010 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc 
kubenswrapper[4720]: I0202 09:19:52.993100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52efc47f-bb34-4935-9b64-94e52a883272-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993350 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993387 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.993529 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994464 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994588 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52efc47f-bb34-4935-9b64-94e52a883272-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994693 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.994732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25c7\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-kube-api-access-g25c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.995158 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.995169 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52efc47f-bb34-4935-9b64-94e52a883272-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.998712 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52efc47f-bb34-4935-9b64-94e52a883272-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:52 crc kubenswrapper[4720]: I0202 09:19:52.998969 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.000934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.010366 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52efc47f-bb34-4935-9b64-94e52a883272-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.017814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25c7\" (UniqueName: \"kubernetes.io/projected/52efc47f-bb34-4935-9b64-94e52a883272-kube-api-access-g25c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.041655 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52efc47f-bb34-4935-9b64-94e52a883272\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.055400 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.131698 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.133484 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.137350 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.145493 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.251172 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5123a4f9-6161-445e-a17c-184cfbe9c4bb","Type":"ContainerStarted","Data":"b8d4a08ef55af3d8bd537fe67577a991ed209cf0acac8bbe719909b722e26b05"} Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.301492 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.301525 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.301568 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.301734 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhsjj\" (UniqueName: \"kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " 
pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.302071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.302128 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.302197 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404278 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404325 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhsjj\" (UniqueName: \"kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404456 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: 
\"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.404939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.405269 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.405420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.405578 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.406033 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.406077 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.423963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhsjj\" (UniqueName: \"kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj\") pod \"dnsmasq-dns-759799d765-fz4nc\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.502432 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:53 crc kubenswrapper[4720]: I0202 09:19:53.647163 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 09:19:54 crc kubenswrapper[4720]: I0202 09:19:54.041579 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:19:54 crc kubenswrapper[4720]: W0202 09:19:54.047080 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff197c22_b3b8_4c01_bcbc_0d1a6660844a.slice/crio-95d2456fbb84ac8859c3159d1ae77fc5f858bee15530681040150594b960ac2e WatchSource:0}: Error finding container 95d2456fbb84ac8859c3159d1ae77fc5f858bee15530681040150594b960ac2e: Status 404 returned error can't find the container with id 95d2456fbb84ac8859c3159d1ae77fc5f858bee15530681040150594b960ac2e Feb 02 09:19:54 crc kubenswrapper[4720]: I0202 09:19:54.270128 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerStarted","Data":"f220064efe04b898f240b13ce7fb2e1607e53f5a7a3a5b63acddae809a77aef8"} Feb 02 09:19:54 crc kubenswrapper[4720]: I0202 09:19:54.270708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerStarted","Data":"95d2456fbb84ac8859c3159d1ae77fc5f858bee15530681040150594b960ac2e"} Feb 02 09:19:54 crc kubenswrapper[4720]: I0202 09:19:54.272747 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5123a4f9-6161-445e-a17c-184cfbe9c4bb","Type":"ContainerStarted","Data":"9bc7174ba3d3268541ea4948210346ba70b49b4e42a2fa520ff1ee42ad619d12"} Feb 02 09:19:54 crc kubenswrapper[4720]: I0202 09:19:54.274581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52efc47f-bb34-4935-9b64-94e52a883272","Type":"ContainerStarted","Data":"751c0150e9e9b3dd62125a1d45b14548700beee673f86c86e7374e0cd4a71c16"} Feb 02 09:19:55 crc kubenswrapper[4720]: I0202 09:19:55.288000 4720 generic.go:334] "Generic (PLEG): container finished" podID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerID="f220064efe04b898f240b13ce7fb2e1607e53f5a7a3a5b63acddae809a77aef8" exitCode=0 Feb 02 09:19:55 crc kubenswrapper[4720]: I0202 09:19:55.291291 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerDied","Data":"f220064efe04b898f240b13ce7fb2e1607e53f5a7a3a5b63acddae809a77aef8"} Feb 02 09:19:56 crc kubenswrapper[4720]: I0202 09:19:56.306446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerStarted","Data":"fcd07f9017267fd8ef60ed3a325745bf6a9f21f06f3f6a642accfe640df4a454"} Feb 02 09:19:56 crc kubenswrapper[4720]: I0202 09:19:56.307060 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:19:56 crc kubenswrapper[4720]: I0202 09:19:56.312172 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52efc47f-bb34-4935-9b64-94e52a883272","Type":"ContainerStarted","Data":"9767ca1e6a0369f56dcdf77d29dd95b851744fcdd4c452e25e3cc11a1c4986d5"} Feb 02 
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.504159 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-759799d765-fz4nc"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.598915 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.600252 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="dnsmasq-dns" containerID="cri-o://0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4" gracePeriod=10
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.847701 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-k9699"]
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.849482 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.885511 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-k9699"]
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958414 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-config\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958838 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958910 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958939 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lrj\" (UniqueName: \"kubernetes.io/projected/dfbef352-9960-44a8-b50d-02a480f008ca-kube-api-access-q4lrj\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:03 crc kubenswrapper[4720]: I0202 09:20:03.958967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060623 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-config\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060713 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4lrj\" (UniqueName: \"kubernetes.io/projected/dfbef352-9960-44a8-b50d-02a480f008ca-kube-api-access-q4lrj\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.060864 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.061769 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.061805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-config\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.062413 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-svc\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.062428 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.062619 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.063270 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfbef352-9960-44a8-b50d-02a480f008ca-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.081742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4lrj\" (UniqueName: \"kubernetes.io/projected/dfbef352-9960-44a8-b50d-02a480f008ca-kube-api-access-q4lrj\") pod \"dnsmasq-dns-5bb847fbb7-k9699\" (UID: \"dfbef352-9960-44a8-b50d-02a480f008ca\") " pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.139714 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.175684 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.267731 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.268104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.268290 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvk4\" (UniqueName: \"kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.268317 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.268421 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.268479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc\") pod \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\" (UID: \"45c0a1be-8f81-4819-bd4b-29ba05a8bce2\") " Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.283259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4" (OuterVolumeSpecName: "kube-api-access-qqvk4") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "kube-api-access-qqvk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.317394 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.329841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config" (OuterVolumeSpecName: "config") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.332846 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.333144 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.334156 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45c0a1be-8f81-4819-bd4b-29ba05a8bce2" (UID: "45c0a1be-8f81-4819-bd4b-29ba05a8bce2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.371852 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvk4\" (UniqueName: \"kubernetes.io/projected/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-kube-api-access-qqvk4\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.372192 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.372216 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.372230 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.372242 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.372254 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45c0a1be-8f81-4819-bd4b-29ba05a8bce2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.431997 4720 generic.go:334] "Generic (PLEG): container finished" podID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerID="0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4" exitCode=0 Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.432059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr" event={"ID":"45c0a1be-8f81-4819-bd4b-29ba05a8bce2","Type":"ContainerDied","Data":"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"} Feb 02 
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.432100 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559f4fbd7-5jrzr"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.432116 4720 scope.go:117] "RemoveContainer" containerID="0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.468951 4720 scope.go:117] "RemoveContainer" containerID="20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.483054 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.493258 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6559f4fbd7-5jrzr"]
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.494858 4720 scope.go:117] "RemoveContainer" containerID="0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"
Feb 02 09:20:04 crc kubenswrapper[4720]: E0202 09:20:04.495316 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4\": container with ID starting with 0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4 not found: ID does not exist" containerID="0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.495342 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4"} err="failed to get container status \"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4\": rpc error: code = NotFound desc = could not find container \"0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4\": container with ID starting with 0a83a2f2a39b690752fd424cde8b717b3c4430a4af4bdfa19984f54351932ba4 not found: ID does not exist"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.495363 4720 scope.go:117] "RemoveContainer" containerID="20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209"
Feb 02 09:20:04 crc kubenswrapper[4720]: E0202 09:20:04.495682 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209\": container with ID starting with 20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209 not found: ID does not exist" containerID="20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209"
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.495715 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209"} err="failed to get container status \"20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209\": rpc error: code = NotFound desc = could not find container \"20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209\": container with ID starting with 20f7aa8b4f9a8e612fe676f3fc696e9a9c6a56410410d12afc881a919a17d209 not found: ID does not exist"
Feb 02 09:20:04 crc kubenswrapper[4720]: W0202 09:20:04.645595 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfbef352_9960_44a8_b50d_02a480f008ca.slice/crio-8718d068496d3739ae5512e0a77f6f2b30814904b296a72c66fd0b456e08efa9 WatchSource:0}: Error finding container 8718d068496d3739ae5512e0a77f6f2b30814904b296a72c66fd0b456e08efa9: Status 404 returned error can't find the container with id 8718d068496d3739ae5512e0a77f6f2b30814904b296a72c66fd0b456e08efa9
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.646179 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb847fbb7-k9699"]
Feb 02 09:20:04 crc kubenswrapper[4720]: I0202 09:20:04.898848 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" path="/var/lib/kubelet/pods/45c0a1be-8f81-4819-bd4b-29ba05a8bce2/volumes"
Feb 02 09:20:05 crc kubenswrapper[4720]: E0202 09:20:05.224082 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfbef352_9960_44a8_b50d_02a480f008ca.slice/crio-11c1a4f068d1704e9b4aa749d2ce701a3588997543cb764009beb33ac8766ca3.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 09:20:05 crc kubenswrapper[4720]: I0202 09:20:05.445993 4720 generic.go:334] "Generic (PLEG): container finished" podID="dfbef352-9960-44a8-b50d-02a480f008ca" containerID="11c1a4f068d1704e9b4aa749d2ce701a3588997543cb764009beb33ac8766ca3" exitCode=0
Feb 02 09:20:05 crc kubenswrapper[4720]: I0202 09:20:05.446101 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" event={"ID":"dfbef352-9960-44a8-b50d-02a480f008ca","Type":"ContainerDied","Data":"11c1a4f068d1704e9b4aa749d2ce701a3588997543cb764009beb33ac8766ca3"}
Feb 02 09:20:05 crc kubenswrapper[4720]: I0202 09:20:05.446358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" event={"ID":"dfbef352-9960-44a8-b50d-02a480f008ca","Type":"ContainerStarted","Data":"8718d068496d3739ae5512e0a77f6f2b30814904b296a72c66fd0b456e08efa9"}
Feb 02 09:20:06 crc kubenswrapper[4720]: I0202 09:20:06.462394 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" event={"ID":"dfbef352-9960-44a8-b50d-02a480f008ca","Type":"ContainerStarted","Data":"a4ef474422f0bacba38a8d6e085282e6c352cbecb1ad3483c4903b14e9274c75"}
Feb 02 09:20:06 crc kubenswrapper[4720]: I0202 09:20:06.463120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
Feb 02 09:20:06 crc kubenswrapper[4720]: I0202 09:20:06.488689 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" podStartSLOduration=3.48866294 podStartE2EDuration="3.48866294s" podCreationTimestamp="2026-02-02 09:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:20:06.485286138 +0000 UTC m=+1440.340911704" watchObservedRunningTime="2026-02-02 09:20:06.48866294 +0000 UTC m=+1440.344288526"
Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.178327 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bb847fbb7-k9699"
pod="openstack/dnsmasq-dns-5bb847fbb7-k9699" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.300108 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.300475 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-759799d765-fz4nc" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="dnsmasq-dns" containerID="cri-o://fcd07f9017267fd8ef60ed3a325745bf6a9f21f06f3f6a642accfe640df4a454" gracePeriod=10 Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.557379 4720 generic.go:334] "Generic (PLEG): container finished" podID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerID="fcd07f9017267fd8ef60ed3a325745bf6a9f21f06f3f6a642accfe640df4a454" exitCode=0 Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.557534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerDied","Data":"fcd07f9017267fd8ef60ed3a325745bf6a9f21f06f3f6a642accfe640df4a454"} Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.788765 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810078 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810164 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810194 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhsjj\" (UniqueName: \"kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810412 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.810762 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam\") pod \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\" (UID: \"ff197c22-b3b8-4c01-bcbc-0d1a6660844a\") " Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.817462 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj" (OuterVolumeSpecName: "kube-api-access-nhsjj") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "kube-api-access-nhsjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.883448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.889704 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.889830 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.893038 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.898935 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config" (OuterVolumeSpecName: "config") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912583 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-config\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912612 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhsjj\" (UniqueName: \"kubernetes.io/projected/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-kube-api-access-nhsjj\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912622 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912631 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912640 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.912648 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:14 crc kubenswrapper[4720]: I0202 09:20:14.917115 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff197c22-b3b8-4c01-bcbc-0d1a6660844a" (UID: "ff197c22-b3b8-4c01-bcbc-0d1a6660844a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.014432 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff197c22-b3b8-4c01-bcbc-0d1a6660844a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.569729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759799d765-fz4nc" event={"ID":"ff197c22-b3b8-4c01-bcbc-0d1a6660844a","Type":"ContainerDied","Data":"95d2456fbb84ac8859c3159d1ae77fc5f858bee15530681040150594b960ac2e"} Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.570092 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759799d765-fz4nc" Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.570387 4720 scope.go:117] "RemoveContainer" containerID="fcd07f9017267fd8ef60ed3a325745bf6a9f21f06f3f6a642accfe640df4a454" Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.623794 4720 scope.go:117] "RemoveContainer" containerID="f220064efe04b898f240b13ce7fb2e1607e53f5a7a3a5b63acddae809a77aef8" Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.623926 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:20:15 crc kubenswrapper[4720]: I0202 09:20:15.632389 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-759799d765-fz4nc"] Feb 02 09:20:16 crc kubenswrapper[4720]: I0202 09:20:16.491648 4720 scope.go:117] "RemoveContainer" containerID="b164e246c3b2b6128ef42247419bed1600f944ffe71d6e4802f564f73a24c193" Feb 02 09:20:16 crc kubenswrapper[4720]: I0202 09:20:16.909470 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" path="/var/lib/kubelet/pods/ff197c22-b3b8-4c01-bcbc-0d1a6660844a/volumes" Feb 02 09:20:17 crc kubenswrapper[4720]: I0202 09:20:17.902372 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:20:17 crc kubenswrapper[4720]: I0202 09:20:17.902867 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.346515 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms"] Feb 02 09:20:27 crc kubenswrapper[4720]: E0202 09:20:27.347671 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.347695 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: E0202 09:20:27.347731 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="init" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.347745 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="init" Feb 02 09:20:27 crc kubenswrapper[4720]: E0202 09:20:27.347775 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="init" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.347786 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="init" Feb 02 09:20:27 crc kubenswrapper[4720]: E0202 09:20:27.348554 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.348573 4720 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.348964 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c0a1be-8f81-4819-bd4b-29ba05a8bce2" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.348989 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff197c22-b3b8-4c01-bcbc-0d1a6660844a" containerName="dnsmasq-dns" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.350044 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.371488 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.371857 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.372023 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.372196 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.375340 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms"] Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.388498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.388702 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwln\" (UniqueName: \"kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.389064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.389338 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.491342 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.491443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.491553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.491686 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lwln\" (UniqueName: \"kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.497336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.497367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.497678 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.510471 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lwln\" (UniqueName: \"kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 
09:20:27.692863 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.709774 4720 generic.go:334] "Generic (PLEG): container finished" podID="5123a4f9-6161-445e-a17c-184cfbe9c4bb" containerID="9bc7174ba3d3268541ea4948210346ba70b49b4e42a2fa520ff1ee42ad619d12" exitCode=0 Feb 02 09:20:27 crc kubenswrapper[4720]: I0202 09:20:27.709943 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5123a4f9-6161-445e-a17c-184cfbe9c4bb","Type":"ContainerDied","Data":"9bc7174ba3d3268541ea4948210346ba70b49b4e42a2fa520ff1ee42ad619d12"} Feb 02 09:20:28 crc kubenswrapper[4720]: W0202 09:20:28.271889 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1257ae5_08dc_4977_9268_d988d889a1e3.slice/crio-0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0 WatchSource:0}: Error finding container 0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0: Status 404 returned error can't find the container with id 0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0 Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.275254 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms"] Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.725462 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5123a4f9-6161-445e-a17c-184cfbe9c4bb","Type":"ContainerStarted","Data":"5bab4de317aca816786f801bb11cdfaffd87ef300115d10f041fea5d101000a0"} Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.726194 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.727860 4720 generic.go:334] "Generic (PLEG): container finished" podID="52efc47f-bb34-4935-9b64-94e52a883272" containerID="9767ca1e6a0369f56dcdf77d29dd95b851744fcdd4c452e25e3cc11a1c4986d5" exitCode=0 Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.727935 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52efc47f-bb34-4935-9b64-94e52a883272","Type":"ContainerDied","Data":"9767ca1e6a0369f56dcdf77d29dd95b851744fcdd4c452e25e3cc11a1c4986d5"} Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.729955 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" event={"ID":"d1257ae5-08dc-4977-9268-d988d889a1e3","Type":"ContainerStarted","Data":"0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0"} Feb 02 09:20:28 crc kubenswrapper[4720]: I0202 09:20:28.760129 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.760099537 podStartE2EDuration="37.760099537s" podCreationTimestamp="2026-02-02 09:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:20:28.749654091 +0000 UTC m=+1462.605279657" watchObservedRunningTime="2026-02-02 09:20:28.760099537 +0000 UTC m=+1462.615725103" Feb 02 09:20:29 crc kubenswrapper[4720]: I0202 09:20:29.778970 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"52efc47f-bb34-4935-9b64-94e52a883272","Type":"ContainerStarted","Data":"62e42ca324ebadf0b71362b054a3d564f5efc97225b7bfbb93bf1e6baf25499f"} Feb 02 09:20:29 crc kubenswrapper[4720]: I0202 09:20:29.779658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:20:29 crc kubenswrapper[4720]: I0202 09:20:29.802385 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.802369157 podStartE2EDuration="37.802369157s" podCreationTimestamp="2026-02-02 09:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:20:29.801241809 +0000 UTC m=+1463.656867385" watchObservedRunningTime="2026-02-02 09:20:29.802369157 +0000 UTC m=+1463.657994713" Feb 02 09:20:38 crc kubenswrapper[4720]: I0202 09:20:38.917561 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" event={"ID":"d1257ae5-08dc-4977-9268-d988d889a1e3","Type":"ContainerStarted","Data":"acb64753a0fea28762aab3783a863ed2fb6ec41ddd723671e99645590b68c89f"} Feb 02 09:20:38 crc kubenswrapper[4720]: I0202 09:20:38.940288 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" podStartSLOduration=2.094958029 podStartE2EDuration="11.94025765s" podCreationTimestamp="2026-02-02 09:20:27 +0000 UTC" firstStartedPulling="2026-02-02 09:20:28.276185655 +0000 UTC m=+1462.131811211" lastFinishedPulling="2026-02-02 09:20:38.121485276 +0000 UTC m=+1471.977110832" observedRunningTime="2026-02-02 09:20:38.932675536 +0000 UTC m=+1472.788301102" watchObservedRunningTime="2026-02-02 09:20:38.94025765 +0000 UTC m=+1472.795883246" Feb 02 09:20:42 crc kubenswrapper[4720]: I0202 09:20:42.027173 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 09:20:43 crc kubenswrapper[4720]: I0202 09:20:43.060011 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 09:20:47 crc kubenswrapper[4720]: I0202 09:20:47.901986 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:20:47 crc kubenswrapper[4720]: I0202 09:20:47.902580 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:20:47 crc kubenswrapper[4720]: I0202 09:20:47.902684 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:20:47 crc kubenswrapper[4720]: I0202 09:20:47.903576 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:20:47 crc kubenswrapper[4720]: I0202 09:20:47.903651 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04" gracePeriod=600 Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.038375 4720 generic.go:334] "Generic (PLEG): container finished" podID="d1257ae5-08dc-4977-9268-d988d889a1e3" containerID="acb64753a0fea28762aab3783a863ed2fb6ec41ddd723671e99645590b68c89f" exitCode=0 Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.038467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" event={"ID":"d1257ae5-08dc-4977-9268-d988d889a1e3","Type":"ContainerDied","Data":"acb64753a0fea28762aab3783a863ed2fb6ec41ddd723671e99645590b68c89f"} Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.042082 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04" exitCode=0 Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.042122 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04"} Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.042176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"} Feb 02 09:20:49 crc kubenswrapper[4720]: I0202 09:20:49.042192 4720 scope.go:117] "RemoveContainer" containerID="06c1946f321e503f0a5c8927a27c1a16ffb7563c527d106ec0880fcbe22267e0" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.744734 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.868874 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam\") pod \"d1257ae5-08dc-4977-9268-d988d889a1e3\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.869071 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle\") pod \"d1257ae5-08dc-4977-9268-d988d889a1e3\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.869190 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory\") pod \"d1257ae5-08dc-4977-9268-d988d889a1e3\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.869439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lwln\" (UniqueName: \"kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln\") pod \"d1257ae5-08dc-4977-9268-d988d889a1e3\" (UID: \"d1257ae5-08dc-4977-9268-d988d889a1e3\") " Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.875919 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln" (OuterVolumeSpecName: "kube-api-access-2lwln") pod "d1257ae5-08dc-4977-9268-d988d889a1e3" (UID: "d1257ae5-08dc-4977-9268-d988d889a1e3"). InnerVolumeSpecName "kube-api-access-2lwln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.876696 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d1257ae5-08dc-4977-9268-d988d889a1e3" (UID: "d1257ae5-08dc-4977-9268-d988d889a1e3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.906123 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1257ae5-08dc-4977-9268-d988d889a1e3" (UID: "d1257ae5-08dc-4977-9268-d988d889a1e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.922096 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory" (OuterVolumeSpecName: "inventory") pod "d1257ae5-08dc-4977-9268-d988d889a1e3" (UID: "d1257ae5-08dc-4977-9268-d988d889a1e3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.973020 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.973078 4720 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.973103 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1257ae5-08dc-4977-9268-d988d889a1e3-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:50 crc kubenswrapper[4720]: I0202 09:20:50.973124 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lwln\" (UniqueName: \"kubernetes.io/projected/d1257ae5-08dc-4977-9268-d988d889a1e3-kube-api-access-2lwln\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.075954 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" event={"ID":"d1257ae5-08dc-4977-9268-d988d889a1e3","Type":"ContainerDied","Data":"0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0"} Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.075999 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de4091bdd4313f00d5ba31f07423626677f8d0e709ead85e0867e340acaa5f0" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.076069 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.229591 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx"] Feb 02 09:20:51 crc kubenswrapper[4720]: E0202 09:20:51.230213 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1257ae5-08dc-4977-9268-d988d889a1e3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.230274 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1257ae5-08dc-4977-9268-d988d889a1e3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.230678 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1257ae5-08dc-4977-9268-d988d889a1e3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.231788 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.235433 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.235671 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.237456 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.237555 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.259849 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx"] Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.380219 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw8m\" (UniqueName: \"kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.380614 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.380791 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.483161 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.483217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.483328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw8m\" (UniqueName: \"kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.491493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.491512 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.512265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw8m\" (UniqueName: \"kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kp2wx\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:51 crc kubenswrapper[4720]: I0202 09:20:51.554985 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:52 crc kubenswrapper[4720]: I0202 09:20:52.171987 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx"] Feb 02 09:20:53 crc kubenswrapper[4720]: I0202 09:20:53.098655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" event={"ID":"5dc086de-1441-4dc6-b225-843ce650e62c","Type":"ContainerStarted","Data":"157cb6da3bedf12e0288e6fa3527bb0a866a6497a06c091b0f342bb4f46ee38a"} Feb 02 09:20:53 crc kubenswrapper[4720]: I0202 09:20:53.099405 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" event={"ID":"5dc086de-1441-4dc6-b225-843ce650e62c","Type":"ContainerStarted","Data":"db8a3ef5e94a37131bbf53d79fbd01ff29797626a43ab0f6e2246f2ab2ac74b0"} Feb 02 09:20:53 crc kubenswrapper[4720]: I0202 09:20:53.127675 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" podStartSLOduration=1.642864682 podStartE2EDuration="2.127644474s" podCreationTimestamp="2026-02-02 09:20:51 +0000 UTC" firstStartedPulling="2026-02-02 09:20:52.188809829 +0000 UTC m=+1486.044435385" lastFinishedPulling="2026-02-02 09:20:52.673589581 +0000 UTC m=+1486.529215177" observedRunningTime="2026-02-02 09:20:53.118627254 +0000 UTC m=+1486.974252850" watchObservedRunningTime="2026-02-02 09:20:53.127644474 +0000 UTC m=+1486.983270060" Feb 02 09:20:56 crc kubenswrapper[4720]: I0202 09:20:56.137249 4720 generic.go:334] "Generic (PLEG): container finished" podID="5dc086de-1441-4dc6-b225-843ce650e62c" containerID="157cb6da3bedf12e0288e6fa3527bb0a866a6497a06c091b0f342bb4f46ee38a" exitCode=0 Feb 02 09:20:56 crc kubenswrapper[4720]: I0202 09:20:56.137306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" event={"ID":"5dc086de-1441-4dc6-b225-843ce650e62c","Type":"ContainerDied","Data":"157cb6da3bedf12e0288e6fa3527bb0a866a6497a06c091b0f342bb4f46ee38a"} Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.724605 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.825499 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory\") pod \"5dc086de-1441-4dc6-b225-843ce650e62c\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.825703 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam\") pod \"5dc086de-1441-4dc6-b225-843ce650e62c\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.825769 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvw8m\" (UniqueName: \"kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m\") pod \"5dc086de-1441-4dc6-b225-843ce650e62c\" (UID: \"5dc086de-1441-4dc6-b225-843ce650e62c\") " Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.838249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m" (OuterVolumeSpecName: "kube-api-access-zvw8m") pod "5dc086de-1441-4dc6-b225-843ce650e62c" (UID: "5dc086de-1441-4dc6-b225-843ce650e62c"). InnerVolumeSpecName "kube-api-access-zvw8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.862739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5dc086de-1441-4dc6-b225-843ce650e62c" (UID: "5dc086de-1441-4dc6-b225-843ce650e62c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.864417 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory" (OuterVolumeSpecName: "inventory") pod "5dc086de-1441-4dc6-b225-843ce650e62c" (UID: "5dc086de-1441-4dc6-b225-843ce650e62c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.927705 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.927739 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvw8m\" (UniqueName: \"kubernetes.io/projected/5dc086de-1441-4dc6-b225-843ce650e62c-kube-api-access-zvw8m\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:57 crc kubenswrapper[4720]: I0202 09:20:57.927749 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5dc086de-1441-4dc6-b225-843ce650e62c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.174863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" event={"ID":"5dc086de-1441-4dc6-b225-843ce650e62c","Type":"ContainerDied","Data":"db8a3ef5e94a37131bbf53d79fbd01ff29797626a43ab0f6e2246f2ab2ac74b0"} Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.175194 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db8a3ef5e94a37131bbf53d79fbd01ff29797626a43ab0f6e2246f2ab2ac74b0" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.174928 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kp2wx" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.314085 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl"] Feb 02 09:20:58 crc kubenswrapper[4720]: E0202 09:20:58.315122 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc086de-1441-4dc6-b225-843ce650e62c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.315164 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc086de-1441-4dc6-b225-843ce650e62c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.315662 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc086de-1441-4dc6-b225-843ce650e62c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.317609 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.320623 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.321114 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.321360 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.321452 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.339776 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl"] Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.438993 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.439521 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.439740 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.439939 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2j6t\" (UniqueName: \"kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.542632 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.542751 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.542830 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2j6t\" (UniqueName: \"kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.543000 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.555781 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.556277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.556383 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.560399 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2j6t\" (UniqueName: \"kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:58 crc kubenswrapper[4720]: I0202 09:20:58.638736 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:20:59 crc kubenswrapper[4720]: I0202 09:20:59.225059 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl"] Feb 02 09:21:00 crc kubenswrapper[4720]: I0202 09:21:00.199638 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" event={"ID":"f616d658-9ec0-457b-a76a-fd6035250f16","Type":"ContainerStarted","Data":"73d6853906fedb34ddfde4e8e168fae326b093e8b5eee236913fd1bdf9a4abb3"} Feb 02 09:21:01 crc kubenswrapper[4720]: I0202 09:21:01.214390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" event={"ID":"f616d658-9ec0-457b-a76a-fd6035250f16","Type":"ContainerStarted","Data":"ede432b472ff42585ceccf6516aa932dc565f1eb5944bf6e67f57c962703f83c"} Feb 02 09:21:01 crc kubenswrapper[4720]: I0202 09:21:01.256287 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" podStartSLOduration=2.722448718 podStartE2EDuration="3.256265374s" podCreationTimestamp="2026-02-02 09:20:58 +0000 UTC" firstStartedPulling="2026-02-02 09:20:59.230877217 +0000 UTC m=+1493.086502793" lastFinishedPulling="2026-02-02 09:20:59.764693883 +0000 UTC m=+1493.620319449" observedRunningTime="2026-02-02 09:21:01.240427928 +0000 UTC m=+1495.096053494" watchObservedRunningTime="2026-02-02 09:21:01.256265374 +0000 UTC m=+1495.111890940" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.634771 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.637631 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.651136 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.731476 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wws7\" (UniqueName: \"kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.731617 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.731705 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.834235 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.834510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wws7\" (UniqueName: \"kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.834618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.834954 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.835044 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.855762 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8wws7\" (UniqueName: \"kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7\") pod \"community-operators-hfbr7\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:02 crc kubenswrapper[4720]: I0202 09:21:02.973669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:03 crc kubenswrapper[4720]: I0202 09:21:03.569804 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:03 crc kubenswrapper[4720]: W0202 09:21:03.575157 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6acb674a_8829_428b_ac3c_d45038e79786.slice/crio-be0e65e68cfca49b261827bb70f0d5c4969fdc161091ea34ab6f7ad217a20d9a WatchSource:0}: Error finding container be0e65e68cfca49b261827bb70f0d5c4969fdc161091ea34ab6f7ad217a20d9a: Status 404 returned error can't find the container with id be0e65e68cfca49b261827bb70f0d5c4969fdc161091ea34ab6f7ad217a20d9a Feb 02 09:21:04 crc kubenswrapper[4720]: I0202 09:21:04.260509 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acb674a-8829-428b-ac3c-d45038e79786" containerID="7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371" exitCode=0 Feb 02 09:21:04 crc kubenswrapper[4720]: I0202 09:21:04.260797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerDied","Data":"7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371"} Feb 02 09:21:04 crc kubenswrapper[4720]: I0202 09:21:04.261055 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerStarted","Data":"be0e65e68cfca49b261827bb70f0d5c4969fdc161091ea34ab6f7ad217a20d9a"} Feb 02 09:21:05 crc kubenswrapper[4720]: I0202 09:21:05.272325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerStarted","Data":"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a"} Feb 02 09:21:06 crc kubenswrapper[4720]: I0202 09:21:06.285842 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acb674a-8829-428b-ac3c-d45038e79786" containerID="31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a" exitCode=0 Feb 02 09:21:06 crc kubenswrapper[4720]: I0202 09:21:06.285928 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerDied","Data":"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a"} Feb 02 09:21:07 crc kubenswrapper[4720]: I0202 09:21:07.296850 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerStarted","Data":"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8"} Feb 02 09:21:07 crc kubenswrapper[4720]: I0202 09:21:07.327897 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfbr7" 
podStartSLOduration=2.881924804 podStartE2EDuration="5.327852521s" podCreationTimestamp="2026-02-02 09:21:02 +0000 UTC" firstStartedPulling="2026-02-02 09:21:04.26366924 +0000 UTC m=+1498.119294836" lastFinishedPulling="2026-02-02 09:21:06.709596987 +0000 UTC m=+1500.565222553" observedRunningTime="2026-02-02 09:21:07.325472064 +0000 UTC m=+1501.181097650" watchObservedRunningTime="2026-02-02 09:21:07.327852521 +0000 UTC m=+1501.183478087" Feb 02 09:21:12 crc kubenswrapper[4720]: I0202 09:21:12.975159 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:12 crc kubenswrapper[4720]: I0202 09:21:12.975859 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:13 crc kubenswrapper[4720]: I0202 09:21:13.047790 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:13 crc kubenswrapper[4720]: I0202 09:21:13.434211 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:13 crc kubenswrapper[4720]: I0202 09:21:13.490195 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:15 crc kubenswrapper[4720]: I0202 09:21:15.393598 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hfbr7" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="registry-server" containerID="cri-o://2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8" gracePeriod=2 Feb 02 09:21:15 crc kubenswrapper[4720]: I0202 09:21:15.958546 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.123635 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities\") pod \"6acb674a-8829-428b-ac3c-d45038e79786\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.123737 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content\") pod \"6acb674a-8829-428b-ac3c-d45038e79786\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.124033 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wws7\" (UniqueName: \"kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7\") pod \"6acb674a-8829-428b-ac3c-d45038e79786\" (UID: \"6acb674a-8829-428b-ac3c-d45038e79786\") " Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.125115 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities" (OuterVolumeSpecName: "utilities") pod "6acb674a-8829-428b-ac3c-d45038e79786" (UID: "6acb674a-8829-428b-ac3c-d45038e79786"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.132021 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7" (OuterVolumeSpecName: "kube-api-access-8wws7") pod "6acb674a-8829-428b-ac3c-d45038e79786" (UID: "6acb674a-8829-428b-ac3c-d45038e79786"). InnerVolumeSpecName "kube-api-access-8wws7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.180957 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6acb674a-8829-428b-ac3c-d45038e79786" (UID: "6acb674a-8829-428b-ac3c-d45038e79786"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.227077 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wws7\" (UniqueName: \"kubernetes.io/projected/6acb674a-8829-428b-ac3c-d45038e79786-kube-api-access-8wws7\") on node \"crc\" DevicePath \"\"" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.227115 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.227127 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acb674a-8829-428b-ac3c-d45038e79786-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.429836 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acb674a-8829-428b-ac3c-d45038e79786" containerID="2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8" exitCode=0 Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.429932 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfbr7" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.429935 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerDied","Data":"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8"} Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.430004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfbr7" event={"ID":"6acb674a-8829-428b-ac3c-d45038e79786","Type":"ContainerDied","Data":"be0e65e68cfca49b261827bb70f0d5c4969fdc161091ea34ab6f7ad217a20d9a"} Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.430025 4720 scope.go:117] "RemoveContainer" containerID="2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.466553 4720 scope.go:117] "RemoveContainer" containerID="31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.479035 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.490110 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hfbr7"] Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.502469 4720 scope.go:117] "RemoveContainer" containerID="7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.561931 4720 scope.go:117] "RemoveContainer" containerID="2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8" Feb 02 09:21:16 crc kubenswrapper[4720]: E0202 09:21:16.562675 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8\": container with ID starting with 2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8 not found: ID does not exist" containerID="2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.562731 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8"} err="failed to get container status \"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8\": rpc error: code = NotFound desc = could not find container \"2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8\": container with ID starting with 2d376b6538e626e0e65b230a312bb8c67af978a5cc0c4d73730c9d7fe20c3ff8 not found: ID does not exist" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.562762 4720 scope.go:117] "RemoveContainer" containerID="31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a" Feb 02 09:21:16 crc kubenswrapper[4720]: E0202 09:21:16.563136 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a\": container with ID starting with 31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a not found: ID does not exist" containerID="31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.563183 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a"} err="failed to get container status \"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a\": rpc error: code = NotFound desc = could not find container \"31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a\": container with ID starting with 31a1fbbbbbf0a54309b39954a587fc5235c335911c71702cc362f3a47ec5697a not found: ID does not exist" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.563203 4720 scope.go:117] "RemoveContainer" containerID="7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371" Feb 02 09:21:16 crc kubenswrapper[4720]: E0202 09:21:16.563894 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371\": container with ID starting with 7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371 not found: ID does not exist" containerID="7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.563951 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371"} err="failed to get container status \"7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371\": rpc error: code = NotFound desc = could not find container \"7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371\": container with ID starting with 7eed50c49fbbd1df3a0ae2940e970fcdfe004a6103cff166aabda1433ba33371 not found: ID does not exist" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.645255 4720 scope.go:117] "RemoveContainer" containerID="602ba461b7641e239dea25fd7e7bcedd3950fa631d75587f8accb918b8fd56a9" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.673248 4720 scope.go:117] "RemoveContainer" containerID="c54986bc443adbc25acefdad6156df95b0b0e3a23dd2b068bb83761d5827089e" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.728784 4720 scope.go:117] "RemoveContainer" containerID="3a7bd0c37ff704610c3790b8d0cb21067ff3f371f5ff8d33e71d6528f34be021" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.764073 4720 scope.go:117] "RemoveContainer" containerID="c4d4d5da4c13449463149ba9d7eb8e56606a3ab571524a9e1dbb373cb26796ff" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.812779 4720 scope.go:117] "RemoveContainer" containerID="324c2f20e4d24da4e428f6fddac0b8de274ce85319d59f223625777ab49caacd" Feb 02 09:21:16 crc kubenswrapper[4720]: I0202 09:21:16.899135 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acb674a-8829-428b-ac3c-d45038e79786" path="/var/lib/kubelet/pods/6acb674a-8829-428b-ac3c-d45038e79786/volumes" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.852865 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:21:46 crc kubenswrapper[4720]: E0202 09:21:46.854485 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="registry-server" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.854521 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="registry-server" Feb 02 09:21:46 crc kubenswrapper[4720]: E0202 09:21:46.854554 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="extract-utilities" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.854572 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="extract-utilities" Feb 02 09:21:46 crc kubenswrapper[4720]: E0202 09:21:46.854628 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="extract-content" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.854647 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="extract-content" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.855207 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acb674a-8829-428b-ac3c-d45038e79786" containerName="registry-server" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.858223 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.879467 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.953855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.953920 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkb5\" (UniqueName: \"kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:46 crc kubenswrapper[4720]: I0202 09:21:46.954308 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.055724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.055831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.055872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpkb5\" (UniqueName: 
\"kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.057095 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.057394 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.084691 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpkb5\" (UniqueName: \"kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5\") pod \"certified-operators-2spfv\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.200365 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.693889 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:21:47 crc kubenswrapper[4720]: W0202 09:21:47.696255 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbda17c81_c795_4209_9f7d_be57fbbe47e6.slice/crio-aa32b9dae211f335c1b03147bef9464392be71416226a41dbab2e35e25a258f7 WatchSource:0}: Error finding container aa32b9dae211f335c1b03147bef9464392be71416226a41dbab2e35e25a258f7: Status 404 returned error can't find the container with id aa32b9dae211f335c1b03147bef9464392be71416226a41dbab2e35e25a258f7 Feb 02 09:21:47 crc kubenswrapper[4720]: I0202 09:21:47.794432 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerStarted","Data":"aa32b9dae211f335c1b03147bef9464392be71416226a41dbab2e35e25a258f7"} Feb 02 09:21:48 crc kubenswrapper[4720]: I0202 09:21:48.808392 4720 generic.go:334] "Generic (PLEG): container finished" podID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerID="c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5" exitCode=0 Feb 02 09:21:48 crc kubenswrapper[4720]: I0202 09:21:48.808444 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerDied","Data":"c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5"} Feb 02 09:21:48 crc kubenswrapper[4720]: I0202 09:21:48.811059 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:21:49 crc kubenswrapper[4720]: I0202 09:21:49.824830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" 
event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerStarted","Data":"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff"} Feb 02 09:21:50 crc kubenswrapper[4720]: I0202 09:21:50.835651 4720 generic.go:334] "Generic (PLEG): container finished" podID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerID="3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff" exitCode=0 Feb 02 09:21:50 crc kubenswrapper[4720]: I0202 09:21:50.837023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerDied","Data":"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff"} Feb 02 09:21:51 crc kubenswrapper[4720]: I0202 09:21:51.845433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerStarted","Data":"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba"} Feb 02 09:21:51 crc kubenswrapper[4720]: I0202 09:21:51.865412 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2spfv" podStartSLOduration=3.387848677 podStartE2EDuration="5.865397695s" podCreationTimestamp="2026-02-02 09:21:46 +0000 UTC" firstStartedPulling="2026-02-02 09:21:48.810784047 +0000 UTC m=+1542.666409603" lastFinishedPulling="2026-02-02 09:21:51.288333035 +0000 UTC m=+1545.143958621" observedRunningTime="2026-02-02 09:21:51.860974937 +0000 UTC m=+1545.716600503" watchObservedRunningTime="2026-02-02 09:21:51.865397695 +0000 UTC m=+1545.721023251" Feb 02 09:21:57 crc kubenswrapper[4720]: I0202 09:21:57.200680 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:57 crc kubenswrapper[4720]: I0202 09:21:57.201225 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:57 crc kubenswrapper[4720]: I0202 09:21:57.274090 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:57 crc kubenswrapper[4720]: I0202 09:21:57.957671 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:21:58 crc kubenswrapper[4720]: I0202 09:21:58.006542 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:21:59 crc kubenswrapper[4720]: I0202 09:21:59.939536 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2spfv" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="registry-server" containerID="cri-o://375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba" gracePeriod=2 Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.523690 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.647459 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content\") pod \"bda17c81-c795-4209-9f7d-be57fbbe47e6\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.647696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities\") pod \"bda17c81-c795-4209-9f7d-be57fbbe47e6\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.647784 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpkb5\" (UniqueName: \"kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5\") pod \"bda17c81-c795-4209-9f7d-be57fbbe47e6\" (UID: \"bda17c81-c795-4209-9f7d-be57fbbe47e6\") " Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.649233 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities" (OuterVolumeSpecName: "utilities") pod "bda17c81-c795-4209-9f7d-be57fbbe47e6" (UID: "bda17c81-c795-4209-9f7d-be57fbbe47e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.656246 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5" (OuterVolumeSpecName: "kube-api-access-wpkb5") pod "bda17c81-c795-4209-9f7d-be57fbbe47e6" (UID: "bda17c81-c795-4209-9f7d-be57fbbe47e6"). InnerVolumeSpecName "kube-api-access-wpkb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.705431 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bda17c81-c795-4209-9f7d-be57fbbe47e6" (UID: "bda17c81-c795-4209-9f7d-be57fbbe47e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.751024 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpkb5\" (UniqueName: \"kubernetes.io/projected/bda17c81-c795-4209-9f7d-be57fbbe47e6-kube-api-access-wpkb5\") on node \"crc\" DevicePath \"\"" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.751068 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.751077 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda17c81-c795-4209-9f7d-be57fbbe47e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.958382 4720 generic.go:334] "Generic (PLEG): container finished" podID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerID="375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba" exitCode=0 Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.958445 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerDied","Data":"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba"} Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.958514 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2spfv" event={"ID":"bda17c81-c795-4209-9f7d-be57fbbe47e6","Type":"ContainerDied","Data":"aa32b9dae211f335c1b03147bef9464392be71416226a41dbab2e35e25a258f7"} Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.958544 4720 scope.go:117] "RemoveContainer" containerID="375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba" Feb 02 09:22:00 crc kubenswrapper[4720]: I0202 09:22:00.958548 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2spfv" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.006582 4720 scope.go:117] "RemoveContainer" containerID="3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.008335 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.029169 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2spfv"] Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.048047 4720 scope.go:117] "RemoveContainer" containerID="c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.112321 4720 scope.go:117] "RemoveContainer" containerID="375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba" Feb 02 09:22:01 crc kubenswrapper[4720]: E0202 09:22:01.112904 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba\": container with ID starting with 375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba not found: ID does not exist" containerID="375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.112967 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba"} err="failed to get container status \"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba\": rpc error: code = NotFound desc = could not find container \"375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba\": container with ID starting with 375d3a5855ab57be031bfe1bef1b0529d39b925c3c6cddc68dd0889939f42fba not found: ID does not exist" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.113018 4720 scope.go:117] "RemoveContainer" containerID="3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff" Feb 02 09:22:01 crc kubenswrapper[4720]: E0202 09:22:01.113470 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff\": container with ID starting with 3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff not found: ID does not exist" containerID="3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.113519 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff"} err="failed to get container status \"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff\": rpc error: code = NotFound desc = could not find container \"3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff\": container with ID starting with 3992f6ce0479807f99feef2024c5e1229d59cba46c2a213c197e0b2090da5fff not found: ID does not exist" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.113549 4720 scope.go:117] "RemoveContainer" containerID="c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5" Feb 02 09:22:01 crc kubenswrapper[4720]: E0202 09:22:01.114179 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5\": container with ID starting with c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5 not found: ID does not exist" containerID="c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5" Feb 02 09:22:01 crc kubenswrapper[4720]: I0202 09:22:01.114222 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5"} err="failed to get container status \"c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5\": rpc error: code = NotFound desc = could not find container \"c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5\": container with ID starting with c61d81b02bf23d007f0c41bbaf9a8b8f16f2517ddf237f038da7912785c375a5 not found: ID does not exist" Feb 02 09:22:02 crc kubenswrapper[4720]: I0202 09:22:02.903439 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" path="/var/lib/kubelet/pods/bda17c81-c795-4209-9f7d-be57fbbe47e6/volumes" Feb 02 09:22:16 crc kubenswrapper[4720]: I0202 09:22:16.967092 4720 scope.go:117] "RemoveContainer" containerID="c981fe18a1377ea6a437d45330dc0b7963cc7073d1d2b4a26b838e1dbc6a75e1" Feb 02 09:22:16 crc kubenswrapper[4720]: I0202 09:22:16.994552 4720 scope.go:117] "RemoveContainer" containerID="a998e4cbb2fa3bb6c533bcc701c44070e3c879ba38cd8a4b6b970daaeb39c7ae" Feb 02 09:22:17 crc kubenswrapper[4720]: I0202 09:22:17.012133 4720 scope.go:117] "RemoveContainer" containerID="0efd010adde000065c9453bd87f3e3c8b0aa22b6807f92c52330e4c3389b9017" Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.135137 4720 scope.go:117] "RemoveContainer" containerID="4449717031a7014cf359eccae2c6706c30073c4d22cfcdfa9c79f07e6435855b" Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.163062 4720 scope.go:117] "RemoveContainer" containerID="8a29434f0aef5c00f3dd1e472b797256aa5382e06d2a3c6743df8527264ea631" Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.187559 4720 scope.go:117] "RemoveContainer" containerID="376e9ad842a14a671e1a0e1441057b751a8dbf37dc7b59e73bc8401e27de8814" Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.210342 4720 scope.go:117] "RemoveContainer" containerID="2e3bf9af39dd7fb3d5d5dee060ab1cec7bfe07407534286937d75e11269ab7a6" Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.901602 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:23:17 crc kubenswrapper[4720]: I0202 09:23:17.902136 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:23:45 crc kubenswrapper[4720]: I0202 09:23:45.597275 4720 generic.go:334] "Generic (PLEG): container finished" podID="f616d658-9ec0-457b-a76a-fd6035250f16" containerID="ede432b472ff42585ceccf6516aa932dc565f1eb5944bf6e67f57c962703f83c" exitCode=0 Feb 02 09:23:45 crc kubenswrapper[4720]: I0202 09:23:45.597375 4720 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" event={"ID":"f616d658-9ec0-457b-a76a-fd6035250f16","Type":"ContainerDied","Data":"ede432b472ff42585ceccf6516aa932dc565f1eb5944bf6e67f57c962703f83c"} Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.155662 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.244402 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle\") pod \"f616d658-9ec0-457b-a76a-fd6035250f16\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.244484 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam\") pod \"f616d658-9ec0-457b-a76a-fd6035250f16\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.244506 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") pod \"f616d658-9ec0-457b-a76a-fd6035250f16\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.244592 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2j6t\" (UniqueName: \"kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t\") pod \"f616d658-9ec0-457b-a76a-fd6035250f16\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.250266 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f616d658-9ec0-457b-a76a-fd6035250f16" (UID: "f616d658-9ec0-457b-a76a-fd6035250f16"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.251938 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t" (OuterVolumeSpecName: "kube-api-access-d2j6t") pod "f616d658-9ec0-457b-a76a-fd6035250f16" (UID: "f616d658-9ec0-457b-a76a-fd6035250f16"). InnerVolumeSpecName "kube-api-access-d2j6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:23:47 crc kubenswrapper[4720]: E0202 09:23:47.271024 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory podName:f616d658-9ec0-457b-a76a-fd6035250f16 nodeName:}" failed. No retries permitted until 2026-02-02 09:23:47.770995883 +0000 UTC m=+1661.626621449 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory") pod "f616d658-9ec0-457b-a76a-fd6035250f16" (UID: "f616d658-9ec0-457b-a76a-fd6035250f16") : error deleting /var/lib/kubelet/pods/f616d658-9ec0-457b-a76a-fd6035250f16/volume-subpaths: remove /var/lib/kubelet/pods/f616d658-9ec0-457b-a76a-fd6035250f16/volume-subpaths: no such file or directory Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.274088 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f616d658-9ec0-457b-a76a-fd6035250f16" (UID: "f616d658-9ec0-457b-a76a-fd6035250f16"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.347847 4720 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.347922 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.347945 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2j6t\" (UniqueName: \"kubernetes.io/projected/f616d658-9ec0-457b-a76a-fd6035250f16-kube-api-access-d2j6t\") on node \"crc\" DevicePath \"\"" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.660302 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" event={"ID":"f616d658-9ec0-457b-a76a-fd6035250f16","Type":"ContainerDied","Data":"73d6853906fedb34ddfde4e8e168fae326b093e8b5eee236913fd1bdf9a4abb3"} Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.660349 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d6853906fedb34ddfde4e8e168fae326b093e8b5eee236913fd1bdf9a4abb3" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.660407 4720 util.go:48] "No ready sandbox for pod can be found. 
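The nestedpendingoperations error above fails the whole inventory-volume cleanup and schedules a 500ms retry because deleting an already-absent volume-subpaths directory returned "no such file or directory", even though an absent directory is exactly the desired end state. A sketch of the ENOENT-tolerant variant of that deletion; cleanSubpathDir is hypothetical, not kubelet's actual implementation:

```go
// Package volcleanup sketches ENOENT-tolerant cleanup of a subPath dir.
package volcleanup

import (
	"errors"
	"io/fs"
	"os"
)

// cleanSubpathDir removes a pod's volume-subpaths directory, treating a
// missing directory as success rather than an error worth retrying.
func cleanSubpathDir(path string) error {
	err := os.Remove(path)
	if errors.Is(err, fs.ErrNotExist) {
		return nil // already gone: the state the cleanup wanted
	}
	return err
}
```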
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.722947 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv"] Feb 02 09:23:47 crc kubenswrapper[4720]: E0202 09:23:47.723396 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f616d658-9ec0-457b-a76a-fd6035250f16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.723417 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f616d658-9ec0-457b-a76a-fd6035250f16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 09:23:47 crc kubenswrapper[4720]: E0202 09:23:47.723429 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="extract-content" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.723437 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="extract-content" Feb 02 09:23:47 crc kubenswrapper[4720]: E0202 09:23:47.723479 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="extract-utilities" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.723487 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="extract-utilities" Feb 02 09:23:47 crc kubenswrapper[4720]: E0202 09:23:47.723505 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="registry-server" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.723513 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="registry-server" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.724017 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f616d658-9ec0-457b-a76a-fd6035250f16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.724037 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda17c81-c795-4209-9f7d-be57fbbe47e6" containerName="registry-server" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.724810 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.741318 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv"] Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.772137 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") pod \"f616d658-9ec0-457b-a76a-fd6035250f16\" (UID: \"f616d658-9ec0-457b-a76a-fd6035250f16\") " Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.777017 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory" (OuterVolumeSpecName: "inventory") pod "f616d658-9ec0-457b-a76a-fd6035250f16" (UID: "f616d658-9ec0-457b-a76a-fd6035250f16"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.874006 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.874378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsm9m\" (UniqueName: \"kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.874512 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.875236 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f616d658-9ec0-457b-a76a-fd6035250f16-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.901382 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.901608 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.978244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.978810 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsm9m\" (UniqueName: \"kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.979982 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.984730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:47 crc kubenswrapper[4720]: I0202 09:23:47.985754 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:48 crc kubenswrapper[4720]: I0202 09:23:48.004272 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsm9m\" (UniqueName: \"kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:48 crc kubenswrapper[4720]: I0202 09:23:48.049504 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:23:48 crc kubenswrapper[4720]: I0202 09:23:48.666947 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv"] Feb 02 09:23:49 crc kubenswrapper[4720]: I0202 09:23:49.694462 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" event={"ID":"0d48c45d-435e-4bff-947d-8bddd768de55","Type":"ContainerStarted","Data":"df1d5137f29a7489b810239924b42fe75a9713fd37cbd95cbf1f2d912e119452"} Feb 02 09:23:49 crc kubenswrapper[4720]: I0202 09:23:49.694968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" event={"ID":"0d48c45d-435e-4bff-947d-8bddd768de55","Type":"ContainerStarted","Data":"20eff69e99099eb33cd90546098b7d9b7bb82b6a777d467be044e22c7b0517a5"} Feb 02 09:23:49 crc kubenswrapper[4720]: I0202 09:23:49.729205 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" podStartSLOduration=2.229641915 podStartE2EDuration="2.729096203s" podCreationTimestamp="2026-02-02 09:23:47 +0000 UTC" firstStartedPulling="2026-02-02 09:23:48.679371603 +0000 UTC m=+1662.534997159" lastFinishedPulling="2026-02-02 09:23:49.178825881 +0000 UTC m=+1663.034451447" observedRunningTime="2026-02-02 09:23:49.719231714 +0000 UTC m=+1663.574857310" watchObservedRunningTime="2026-02-02 09:23:49.729096203 +0000 UTC m=+1663.584721809" Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.293121 4720 scope.go:117] "RemoveContainer" containerID="0af1b9479539086cb94324d297b5c683b9a5d63cda81ae55ad754969030ef4d2" Feb 02 09:24:17 
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.324288 4720 scope.go:117] "RemoveContainer" containerID="d1bcedfb660ee9ddf9cc8160561f7473fc5261047292c34c066d12817f6a8529"
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.901721 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.902275 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.902346 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw"
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.903536 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 09:24:17 crc kubenswrapper[4720]: I0202 09:24:17.903643 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" gracePeriod=600
Feb 02 09:24:18 crc kubenswrapper[4720]: E0202 09:24:18.037817 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:24:18 crc kubenswrapper[4720]: I0202 09:24:18.096168 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" exitCode=0
Feb 02 09:24:18 crc kubenswrapper[4720]: I0202 09:24:18.096229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"}
Feb 02 09:24:18 crc kubenswrapper[4720]: I0202 09:24:18.096269 4720 scope.go:117] "RemoveContainer" containerID="ab31c90e1e148f73f162e2be60fd4d3028bdf40b46acc10afc7a7e25161d4a04"
Feb 02 09:24:18 crc kubenswrapper[4720]: I0202 09:24:18.097072 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:24:18 crc kubenswrapper[4720]: E0202 09:24:18.097356 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:24:30 crc kubenswrapper[4720]: I0202 09:24:30.887817 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:24:30 crc kubenswrapper[4720]: E0202 09:24:30.888727 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:24:41 crc kubenswrapper[4720]: I0202 09:24:41.886981 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:24:41 crc kubenswrapper[4720]: E0202 09:24:41.887776 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:24:54 crc kubenswrapper[4720]: I0202 09:24:54.887515 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:24:54 crc kubenswrapper[4720]: E0202 09:24:54.889871 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:25:09 crc kubenswrapper[4720]: I0202 09:25:09.041639 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7d25d"]
Feb 02 09:25:09 crc kubenswrapper[4720]: I0202 09:25:09.055418 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7d25d"]
Feb 02 09:25:09 crc kubenswrapper[4720]: I0202 09:25:09.887708 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:25:09 crc kubenswrapper[4720]: E0202 09:25:09.888212 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
path="/var/lib/kubelet/pods/8c3389dc-d691-4727-966f-38f108bd6309/volumes" Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.049643 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1d0d-account-create-update-6xwpp"] Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.079025 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4ac9-account-create-update-tdwhd"] Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.099215 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zxj6c"] Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.114436 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4ac9-account-create-update-tdwhd"] Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.121953 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1d0d-account-create-update-6xwpp"] Feb 02 09:25:11 crc kubenswrapper[4720]: I0202 09:25:11.128837 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zxj6c"] Feb 02 09:25:12 crc kubenswrapper[4720]: I0202 09:25:12.899944 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966b44cd-7bad-4ec3-b906-16a88bd144b2" path="/var/lib/kubelet/pods/966b44cd-7bad-4ec3-b906-16a88bd144b2/volumes" Feb 02 09:25:12 crc kubenswrapper[4720]: I0202 09:25:12.901209 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af4aafe2-8ad5-43c7-b929-2c357e58ff01" path="/var/lib/kubelet/pods/af4aafe2-8ad5-43c7-b929-2c357e58ff01/volumes" Feb 02 09:25:12 crc kubenswrapper[4720]: I0202 09:25:12.902189 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff441e88-a0ed-4b80-80d9-32ee9885ad6a" path="/var/lib/kubelet/pods/ff441e88-a0ed-4b80-80d9-32ee9885ad6a/volumes" Feb 02 09:25:14 crc kubenswrapper[4720]: I0202 09:25:14.708255 4720 generic.go:334] "Generic (PLEG): container finished" podID="0d48c45d-435e-4bff-947d-8bddd768de55" containerID="df1d5137f29a7489b810239924b42fe75a9713fd37cbd95cbf1f2d912e119452" exitCode=0 Feb 02 09:25:14 crc kubenswrapper[4720]: I0202 09:25:14.708346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" event={"ID":"0d48c45d-435e-4bff-947d-8bddd768de55","Type":"ContainerDied","Data":"df1d5137f29a7489b810239924b42fe75a9713fd37cbd95cbf1f2d912e119452"} Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.066492 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bde2-account-create-update-vjhgn"] Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.075931 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8mvjv"] Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.084143 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8mvjv"] Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.091963 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bde2-account-create-update-vjhgn"] Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.219760 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.359477 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsm9m\" (UniqueName: \"kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m\") pod \"0d48c45d-435e-4bff-947d-8bddd768de55\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.359536 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory\") pod \"0d48c45d-435e-4bff-947d-8bddd768de55\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.359607 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam\") pod \"0d48c45d-435e-4bff-947d-8bddd768de55\" (UID: \"0d48c45d-435e-4bff-947d-8bddd768de55\") " Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.369083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m" (OuterVolumeSpecName: "kube-api-access-qsm9m") pod "0d48c45d-435e-4bff-947d-8bddd768de55" (UID: "0d48c45d-435e-4bff-947d-8bddd768de55"). InnerVolumeSpecName "kube-api-access-qsm9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.388862 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory" (OuterVolumeSpecName: "inventory") pod "0d48c45d-435e-4bff-947d-8bddd768de55" (UID: "0d48c45d-435e-4bff-947d-8bddd768de55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.393087 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d48c45d-435e-4bff-947d-8bddd768de55" (UID: "0d48c45d-435e-4bff-947d-8bddd768de55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.462598 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsm9m\" (UniqueName: \"kubernetes.io/projected/0d48c45d-435e-4bff-947d-8bddd768de55-kube-api-access-qsm9m\") on node \"crc\" DevicePath \"\"" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.462664 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.462684 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d48c45d-435e-4bff-947d-8bddd768de55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.736619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" event={"ID":"0d48c45d-435e-4bff-947d-8bddd768de55","Type":"ContainerDied","Data":"20eff69e99099eb33cd90546098b7d9b7bb82b6a777d467be044e22c7b0517a5"} Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.736672 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.736688 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20eff69e99099eb33cd90546098b7d9b7bb82b6a777d467be044e22c7b0517a5" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.849039 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb"] Feb 02 09:25:16 crc kubenswrapper[4720]: E0202 09:25:16.849535 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d48c45d-435e-4bff-947d-8bddd768de55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.849555 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d48c45d-435e-4bff-947d-8bddd768de55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.849773 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d48c45d-435e-4bff-947d-8bddd768de55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.850564 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.853215 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.853502 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.855188 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.856337 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.919015 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d0d455-b8c0-4bc5-9f79-5050021d55bc" path="/var/lib/kubelet/pods/04d0d455-b8c0-4bc5-9f79-5050021d55bc/volumes" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.919796 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2813d031-5b81-42b0-82bd-9ef9dc55a7aa" path="/var/lib/kubelet/pods/2813d031-5b81-42b0-82bd-9ef9dc55a7aa/volumes" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.920502 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb"] Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.981549 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.981682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:16 crc kubenswrapper[4720]: I0202 09:25:16.981792 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cps\" (UniqueName: \"kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.024634 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t585f"] Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.035001 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t585f"] Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.082952 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cps\" (UniqueName: 
\"kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.083063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.083115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.087831 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.088688 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.100742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cps\" (UniqueName: \"kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.185354 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.423130 4720 scope.go:117] "RemoveContainer" containerID="e859681590e0432a59ce41f20057db9f956a7378399fbbe6711d2e6bb86d1b7e" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.451272 4720 scope.go:117] "RemoveContainer" containerID="6d25fb036c59c2f014a67479e45ca13133f3db3e68f0efca7721568d4a98d053" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.506683 4720 scope.go:117] "RemoveContainer" containerID="306b86e68c2d10e02819653f56e38eddf0db63c44b06064d9817809e8a604845" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.680760 4720 scope.go:117] "RemoveContainer" containerID="25bbc41f91b78b006b9bb53474e02a04dcc7ad1a95e71477286dea630a2d527d" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.716911 4720 scope.go:117] "RemoveContainer" containerID="56d991376cc9e7ecccb9a4da327e8a0dbacf60a09c5c0a199f95a552387b0524" Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.776144 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb"] Feb 02 09:25:17 crc kubenswrapper[4720]: I0202 09:25:17.825611 4720 scope.go:117] "RemoveContainer" containerID="6c4d5ea96bab908e8b4f5ba7b0b91d4eff32b38adecec74a522676f253d87e31" Feb 02 09:25:18 crc kubenswrapper[4720]: I0202 09:25:18.788374 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" event={"ID":"a980c334-6351-4282-abd8-5be6adfd3b79","Type":"ContainerStarted","Data":"1d99e6f23c5fa67481c37eb07dedf35281e7ff633a9483bde4d8a41745cda2d6"} Feb 02 09:25:18 crc kubenswrapper[4720]: I0202 09:25:18.788837 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" event={"ID":"a980c334-6351-4282-abd8-5be6adfd3b79","Type":"ContainerStarted","Data":"3954d0d3444cdb733c8f4b29fed47996d09d6a3f1c1b8f66d168139a55951dd6"} Feb 02 09:25:18 crc kubenswrapper[4720]: I0202 09:25:18.821308 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" podStartSLOduration=2.326430883 podStartE2EDuration="2.821287005s" podCreationTimestamp="2026-02-02 09:25:16 +0000 UTC" firstStartedPulling="2026-02-02 09:25:17.83974839 +0000 UTC m=+1751.695373946" lastFinishedPulling="2026-02-02 09:25:18.334604502 +0000 UTC m=+1752.190230068" observedRunningTime="2026-02-02 09:25:18.812964092 +0000 UTC m=+1752.668589678" watchObservedRunningTime="2026-02-02 09:25:18.821287005 +0000 UTC m=+1752.676912571" Feb 02 09:25:18 crc kubenswrapper[4720]: I0202 09:25:18.900909 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a687a1-7597-4207-b881-e2873c4b2f33" path="/var/lib/kubelet/pods/f5a687a1-7597-4207-b881-e2873c4b2f33/volumes" Feb 02 09:25:23 crc kubenswrapper[4720]: I0202 09:25:23.888322 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:25:23 crc kubenswrapper[4720]: E0202 09:25:23.889237 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:25:35 crc kubenswrapper[4720]: I0202 09:25:35.887906 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:25:35 crc kubenswrapper[4720]: E0202 09:25:35.888727 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.043398 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-dgs6k"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.056915 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6b58q"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.072382 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc5a-account-create-update-vsv48"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.082492 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8d04-account-create-update-mh57v"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.094338 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fbxwd"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.107970 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-dgs6k"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.115312 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cc5a-account-create-update-vsv48"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.127619 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8d04-account-create-update-mh57v"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.136960 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6b58q"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.145111 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fbxwd"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.155483 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ffb2-account-create-update-8kngw"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.170047 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m26nr"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.176996 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-1169-account-create-update-qsh8m"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.186583 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m26nr"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.214120 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-1169-account-create-update-qsh8m"] Feb 02 09:25:37 crc kubenswrapper[4720]: I0202 09:25:37.223381 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ffb2-account-create-update-8kngw"] Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.899342 4720 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="12555194-f017-4145-a0cf-8f9369bdaa76" path="/var/lib/kubelet/pods/12555194-f017-4145-a0cf-8f9369bdaa76/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.900294 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3417648f-9a90-4897-87ab-0131b5906201" path="/var/lib/kubelet/pods/3417648f-9a90-4897-87ab-0131b5906201/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.901350 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3444a48e-b0df-47ec-b6d8-a43708d1f84a" path="/var/lib/kubelet/pods/3444a48e-b0df-47ec-b6d8-a43708d1f84a/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.901968 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383c4f2b-6f59-45a2-a121-f1e94f555a96" path="/var/lib/kubelet/pods/383c4f2b-6f59-45a2-a121-f1e94f555a96/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.903405 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fff2609-d43b-4174-bdc2-cdab850baf7e" path="/var/lib/kubelet/pods/3fff2609-d43b-4174-bdc2-cdab850baf7e/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.904149 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8658d1c-5f58-4e0a-af31-7e87b7843e8e" path="/var/lib/kubelet/pods/c8658d1c-5f58-4e0a-af31-7e87b7843e8e/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.904816 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18e396c-f47c-4be7-8ca8-c5ff31393401" path="/var/lib/kubelet/pods/e18e396c-f47c-4be7-8ca8-c5ff31393401/volumes" Feb 02 09:25:38 crc kubenswrapper[4720]: I0202 09:25:38.906137 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2450cc2-ff6b-4827-a81c-3dc7a69854b0" path="/var/lib/kubelet/pods/f2450cc2-ff6b-4827-a81c-3dc7a69854b0/volumes" Feb 02 09:25:40 crc kubenswrapper[4720]: I0202 09:25:40.030580 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dcm6l"] Feb 02 09:25:40 crc kubenswrapper[4720]: I0202 09:25:40.040915 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dcm6l"] Feb 02 09:25:40 crc kubenswrapper[4720]: I0202 09:25:40.898342 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd226b95-5b7d-4a56-a605-e63267494899" path="/var/lib/kubelet/pods/bd226b95-5b7d-4a56-a605-e63267494899/volumes" Feb 02 09:25:41 crc kubenswrapper[4720]: I0202 09:25:41.037997 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cn7l9"] Feb 02 09:25:41 crc kubenswrapper[4720]: I0202 09:25:41.046811 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cn7l9"] Feb 02 09:25:42 crc kubenswrapper[4720]: I0202 09:25:42.897815 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bce2adf-98dc-4eb6-90e3-c2956976b371" path="/var/lib/kubelet/pods/0bce2adf-98dc-4eb6-90e3-c2956976b371/volumes" Feb 02 09:25:46 crc kubenswrapper[4720]: I0202 09:25:46.894831 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:25:46 crc kubenswrapper[4720]: E0202 09:25:46.895600 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 02 09:25:58 crc kubenswrapper[4720]: I0202 09:25:58.886840 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:25:58 crc kubenswrapper[4720]: E0202 09:25:58.887782 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:26:11 crc kubenswrapper[4720]: I0202 09:26:11.887248 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:26:11 crc kubenswrapper[4720]: E0202 09:26:11.888545 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:26:12 crc kubenswrapper[4720]: I0202 09:26:12.064552 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6zn5f"]
Feb 02 09:26:12 crc kubenswrapper[4720]: I0202 09:26:12.079658 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6zn5f"]
Feb 02 09:26:12 crc kubenswrapper[4720]: I0202 09:26:12.909630 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4af37e-f6d7-4a2a-acf1-82ed860df8f2" path="/var/lib/kubelet/pods/ce4af37e-f6d7-4a2a-acf1-82ed860df8f2/volumes"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.033480 4720 scope.go:117] "RemoveContainer" containerID="7d80b9b8c2cddd6e834c7769c404fac652bb0b1340175f23953c46b703fe771a"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.104534 4720 scope.go:117] "RemoveContainer" containerID="7308ba5de3fa9560e5bff6fbf79475063851b5d283205da6e4ffa0ca38c0f4c6"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.170206 4720 scope.go:117] "RemoveContainer" containerID="2170f92bffcf28092d3ec6dd9e584f0254423cd7c1eb77c02d73b9575a0eefc9"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.216994 4720 scope.go:117] "RemoveContainer" containerID="1982e55fe5154815513d133addf473bef686630793bf9a2f0f6734cf04e8d56c"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.253840 4720 scope.go:117] "RemoveContainer" containerID="ad2ffeb17c52f830cd9a6b29f456a54fc38186d37726cefbead6ea36b126f749"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.319935 4720 scope.go:117] "RemoveContainer" containerID="df0178473bd7bec51568932c723d57e1002628583d84c81def4d9b858140a0fe"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.346319 4720 scope.go:117] "RemoveContainer" containerID="fbf3442a05db1e70751168b74583e6d81ee0ae849f23e44081e26651c55e6746"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.365308 4720 scope.go:117] "RemoveContainer" containerID="f9740ccb223a5ad43718db224ed2c1a04ff244269b3ee07e62f33ed7da41deb8"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.392279 4720 scope.go:117] "RemoveContainer" containerID="961f1649a2e34151d13896f96740f2da013274c18538a28f50d00e89e9ca604c"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.411455 4720 scope.go:117] "RemoveContainer" containerID="8665e8c1444eb7ed2e71f0bd7d0f8387ab49bc92254adbcfaed206b7e62a2637"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.434279 4720 scope.go:117] "RemoveContainer" containerID="0d0105bb311a8924dec62239afedbb1f56df6f4e899adeb0c02530d7be02a382"
Feb 02 09:26:18 crc kubenswrapper[4720]: I0202 09:26:18.456055 4720 scope.go:117] "RemoveContainer" containerID="a7af5c92d927bacfaa0c3f588c54784a78dd452735b0432ad4a43d042501daf7"
Feb 02 09:26:20 crc kubenswrapper[4720]: I0202 09:26:20.067293 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j9h6k"]
Feb 02 09:26:20 crc kubenswrapper[4720]: I0202 09:26:20.081651 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j9h6k"]
Feb 02 09:26:20 crc kubenswrapper[4720]: I0202 09:26:20.906190 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf88a12-cd68-4b5c-a7b1-ad649a75791e" path="/var/lib/kubelet/pods/3cf88a12-cd68-4b5c-a7b1-ad649a75791e/volumes"
Feb 02 09:26:22 crc kubenswrapper[4720]: I0202 09:26:22.441231 4720 generic.go:334] "Generic (PLEG): container finished" podID="a980c334-6351-4282-abd8-5be6adfd3b79" containerID="1d99e6f23c5fa67481c37eb07dedf35281e7ff633a9483bde4d8a41745cda2d6" exitCode=0
Feb 02 09:26:22 crc kubenswrapper[4720]: I0202 09:26:22.441375 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" event={"ID":"a980c334-6351-4282-abd8-5be6adfd3b79","Type":"ContainerDied","Data":"1d99e6f23c5fa67481c37eb07dedf35281e7ff633a9483bde4d8a41745cda2d6"}
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.046241 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mz5n2"]
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.062762 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mz5n2"]
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.839478 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb"
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.887614 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97"
Feb 02 09:26:23 crc kubenswrapper[4720]: E0202 09:26:23.887941 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.947999 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam\") pod \"a980c334-6351-4282-abd8-5be6adfd3b79\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") "
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.948051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42cps\" (UniqueName: \"kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps\") pod \"a980c334-6351-4282-abd8-5be6adfd3b79\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") "
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.948124 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory\") pod \"a980c334-6351-4282-abd8-5be6adfd3b79\" (UID: \"a980c334-6351-4282-abd8-5be6adfd3b79\") "
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.955178 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps" (OuterVolumeSpecName: "kube-api-access-42cps") pod "a980c334-6351-4282-abd8-5be6adfd3b79" (UID: "a980c334-6351-4282-abd8-5be6adfd3b79"). InnerVolumeSpecName "kube-api-access-42cps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.976325 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a980c334-6351-4282-abd8-5be6adfd3b79" (UID: "a980c334-6351-4282-abd8-5be6adfd3b79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:26:23 crc kubenswrapper[4720]: I0202 09:26:23.976351 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory" (OuterVolumeSpecName: "inventory") pod "a980c334-6351-4282-abd8-5be6adfd3b79" (UID: "a980c334-6351-4282-abd8-5be6adfd3b79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.050795 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.051154 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42cps\" (UniqueName: \"kubernetes.io/projected/a980c334-6351-4282-abd8-5be6adfd3b79-kube-api-access-42cps\") on node \"crc\" DevicePath \"\""
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.051168 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a980c334-6351-4282-abd8-5be6adfd3b79-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.459569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb" event={"ID":"a980c334-6351-4282-abd8-5be6adfd3b79","Type":"ContainerDied","Data":"3954d0d3444cdb733c8f4b29fed47996d09d6a3f1c1b8f66d168139a55951dd6"}
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.459613 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3954d0d3444cdb733c8f4b29fed47996d09d6a3f1c1b8f66d168139a55951dd6"
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.459628 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb"
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.559094 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84"]
Feb 02 09:26:24 crc kubenswrapper[4720]: E0202 09:26:24.559517 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a980c334-6351-4282-abd8-5be6adfd3b79" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.559537 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980c334-6351-4282-abd8-5be6adfd3b79" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.559757 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a980c334-6351-4282-abd8-5be6adfd3b79" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.560473 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.562217 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.562730 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.562963 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.562963 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.570024 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84"] Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.661555 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.661803 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cn6x\" (UniqueName: \"kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.661937 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.765315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cn6x\" (UniqueName: \"kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.765436 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.765685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.771833 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.772789 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.812507 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cn6x\" (UniqueName: \"kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fqw84\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.876657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:24 crc kubenswrapper[4720]: I0202 09:26:24.900437 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3af89e-0227-4cd5-a546-b9ef7ec514a7" path="/var/lib/kubelet/pods/db3af89e-0227-4cd5-a546-b9ef7ec514a7/volumes" Feb 02 09:26:25 crc kubenswrapper[4720]: W0202 09:26:25.450672 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20241116_e310_4877_b6a3_c0c72b2470fd.slice/crio-6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f WatchSource:0}: Error finding container 6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f: Status 404 returned error can't find the container with id 6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f Feb 02 09:26:25 crc kubenswrapper[4720]: I0202 09:26:25.453338 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84"] Feb 02 09:26:25 crc kubenswrapper[4720]: I0202 09:26:25.469479 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" event={"ID":"20241116-e310-4877-b6a3-c0c72b2470fd","Type":"ContainerStarted","Data":"6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f"} Feb 02 09:26:26 crc kubenswrapper[4720]: I0202 09:26:26.480767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" event={"ID":"20241116-e310-4877-b6a3-c0c72b2470fd","Type":"ContainerStarted","Data":"7d66298b0c4aad065f7de2fa2ee9aaefb4f9f804fd3e55b950c898962cccbea6"} Feb 02 09:26:26 crc kubenswrapper[4720]: I0202 
09:26:26.511486 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" podStartSLOduration=1.96910041 podStartE2EDuration="2.511462518s" podCreationTimestamp="2026-02-02 09:26:24 +0000 UTC" firstStartedPulling="2026-02-02 09:26:25.45255988 +0000 UTC m=+1819.308185446" lastFinishedPulling="2026-02-02 09:26:25.994921998 +0000 UTC m=+1819.850547554" observedRunningTime="2026-02-02 09:26:26.4979887 +0000 UTC m=+1820.353614256" watchObservedRunningTime="2026-02-02 09:26:26.511462518 +0000 UTC m=+1820.367088094" Feb 02 09:26:31 crc kubenswrapper[4720]: I0202 09:26:31.534573 4720 generic.go:334] "Generic (PLEG): container finished" podID="20241116-e310-4877-b6a3-c0c72b2470fd" containerID="7d66298b0c4aad065f7de2fa2ee9aaefb4f9f804fd3e55b950c898962cccbea6" exitCode=0 Feb 02 09:26:31 crc kubenswrapper[4720]: I0202 09:26:31.534620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" event={"ID":"20241116-e310-4877-b6a3-c0c72b2470fd","Type":"ContainerDied","Data":"7d66298b0c4aad065f7de2fa2ee9aaefb4f9f804fd3e55b950c898962cccbea6"} Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.045963 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.138452 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory\") pod \"20241116-e310-4877-b6a3-c0c72b2470fd\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.138972 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam\") pod \"20241116-e310-4877-b6a3-c0c72b2470fd\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.139109 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cn6x\" (UniqueName: \"kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x\") pod \"20241116-e310-4877-b6a3-c0c72b2470fd\" (UID: \"20241116-e310-4877-b6a3-c0c72b2470fd\") " Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.144148 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x" (OuterVolumeSpecName: "kube-api-access-6cn6x") pod "20241116-e310-4877-b6a3-c0c72b2470fd" (UID: "20241116-e310-4877-b6a3-c0c72b2470fd"). InnerVolumeSpecName "kube-api-access-6cn6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.176257 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "20241116-e310-4877-b6a3-c0c72b2470fd" (UID: "20241116-e310-4877-b6a3-c0c72b2470fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.180388 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory" (OuterVolumeSpecName: "inventory") pod "20241116-e310-4877-b6a3-c0c72b2470fd" (UID: "20241116-e310-4877-b6a3-c0c72b2470fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.241545 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.241583 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20241116-e310-4877-b6a3-c0c72b2470fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.241596 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cn6x\" (UniqueName: \"kubernetes.io/projected/20241116-e310-4877-b6a3-c0c72b2470fd-kube-api-access-6cn6x\") on node \"crc\" DevicePath \"\"" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.563362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" event={"ID":"20241116-e310-4877-b6a3-c0c72b2470fd","Type":"ContainerDied","Data":"6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f"} Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.563418 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad269dc47c1c94c5d7d298387ac3e203f702758861f4d5647361d5261e5912f" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.563420 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fqw84" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.657187 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd"] Feb 02 09:26:33 crc kubenswrapper[4720]: E0202 09:26:33.657604 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20241116-e310-4877-b6a3-c0c72b2470fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.657627 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="20241116-e310-4877-b6a3-c0c72b2470fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.657867 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="20241116-e310-4877-b6a3-c0c72b2470fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.658690 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.661042 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.661608 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.661661 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.661620 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.668105 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd"] Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.753188 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.753392 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.753510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjt6\" (UniqueName: \"kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.855953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.856149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.856255 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjt6\" (UniqueName: \"kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.861105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.861245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.878078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjt6\" (UniqueName: \"kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cn2qd\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:33 crc kubenswrapper[4720]: I0202 09:26:33.984555 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:26:34 crc kubenswrapper[4720]: I0202 09:26:34.046532 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vn2mf"] Feb 02 09:26:34 crc kubenswrapper[4720]: I0202 09:26:34.062809 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vn2mf"] Feb 02 09:26:34 crc kubenswrapper[4720]: I0202 09:26:34.603827 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd"] Feb 02 09:26:34 crc kubenswrapper[4720]: I0202 09:26:34.898671 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1890e68-1a9c-4180-b989-6e178510e23b" path="/var/lib/kubelet/pods/a1890e68-1a9c-4180-b989-6e178510e23b/volumes" Feb 02 09:26:35 crc kubenswrapper[4720]: I0202 09:26:35.586338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" event={"ID":"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe","Type":"ContainerStarted","Data":"7f9089a0997834220d461cfe9c29d35b3f94ec6f0968a258ec2ec99a3d21c843"} Feb 02 09:26:35 crc kubenswrapper[4720]: I0202 09:26:35.586755 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" event={"ID":"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe","Type":"ContainerStarted","Data":"df96a6091889969f9241f02886ad563d7db8ca55f2d9aec8027bf4a2d514ac5c"} Feb 02 09:26:35 crc kubenswrapper[4720]: I0202 09:26:35.612974 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" podStartSLOduration=2.164163643 podStartE2EDuration="2.612953194s" podCreationTimestamp="2026-02-02 09:26:33 +0000 UTC" firstStartedPulling="2026-02-02 09:26:34.60884968 +0000 UTC m=+1828.464475246" 
lastFinishedPulling="2026-02-02 09:26:35.057639241 +0000 UTC m=+1828.913264797" observedRunningTime="2026-02-02 09:26:35.60659827 +0000 UTC m=+1829.462223866" watchObservedRunningTime="2026-02-02 09:26:35.612953194 +0000 UTC m=+1829.468578750" Feb 02 09:26:36 crc kubenswrapper[4720]: I0202 09:26:36.037447 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mcqm2"] Feb 02 09:26:36 crc kubenswrapper[4720]: I0202 09:26:36.054334 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mcqm2"] Feb 02 09:26:36 crc kubenswrapper[4720]: I0202 09:26:36.897355 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:26:36 crc kubenswrapper[4720]: E0202 09:26:36.898061 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:26:36 crc kubenswrapper[4720]: I0202 09:26:36.911444 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691b5691-2178-4f8e-a40c-7dfe5bec0f1b" path="/var/lib/kubelet/pods/691b5691-2178-4f8e-a40c-7dfe5bec0f1b/volumes" Feb 02 09:26:42 crc kubenswrapper[4720]: I0202 09:26:42.030328 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-jspmg"] Feb 02 09:26:42 crc kubenswrapper[4720]: I0202 09:26:42.037253 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-jspmg"] Feb 02 09:26:42 crc kubenswrapper[4720]: I0202 09:26:42.899751 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a624e5d-098a-44e1-95b7-fa398979891a" path="/var/lib/kubelet/pods/1a624e5d-098a-44e1-95b7-fa398979891a/volumes" Feb 02 09:26:50 crc kubenswrapper[4720]: I0202 09:26:50.887423 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:26:50 crc kubenswrapper[4720]: E0202 09:26:50.888117 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:27:03 crc kubenswrapper[4720]: I0202 09:27:03.887321 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:27:03 crc kubenswrapper[4720]: E0202 09:27:03.888691 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:27:09 crc kubenswrapper[4720]: I0202 09:27:09.369043 4720 generic.go:334] "Generic (PLEG): container finished" podID="f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" 
containerID="7f9089a0997834220d461cfe9c29d35b3f94ec6f0968a258ec2ec99a3d21c843" exitCode=0 Feb 02 09:27:09 crc kubenswrapper[4720]: I0202 09:27:09.369182 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" event={"ID":"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe","Type":"ContainerDied","Data":"7f9089a0997834220d461cfe9c29d35b3f94ec6f0968a258ec2ec99a3d21c843"} Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.838349 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.927842 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam\") pod \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.928221 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory\") pod \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.928409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmjt6\" (UniqueName: \"kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6\") pod \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\" (UID: \"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe\") " Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.932895 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6" (OuterVolumeSpecName: "kube-api-access-lmjt6") pod "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" (UID: "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe"). InnerVolumeSpecName "kube-api-access-lmjt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.960002 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" (UID: "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:27:10 crc kubenswrapper[4720]: I0202 09:27:10.961370 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory" (OuterVolumeSpecName: "inventory") pod "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" (UID: "f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.030602 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.030634 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.030643 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmjt6\" (UniqueName: \"kubernetes.io/projected/f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe-kube-api-access-lmjt6\") on node \"crc\" DevicePath \"\"" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.396193 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" event={"ID":"f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe","Type":"ContainerDied","Data":"df96a6091889969f9241f02886ad563d7db8ca55f2d9aec8027bf4a2d514ac5c"} Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.396253 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df96a6091889969f9241f02886ad563d7db8ca55f2d9aec8027bf4a2d514ac5c" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.396280 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cn2qd" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.557491 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9"] Feb 02 09:27:11 crc kubenswrapper[4720]: E0202 09:27:11.558058 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.558094 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.558359 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.559251 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.563295 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.563685 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.563871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.564070 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.594298 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9"] Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.657221 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbgvh\" (UniqueName: \"kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.657326 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.657395 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.759085 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbgvh\" (UniqueName: \"kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.759225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.759296 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.768343 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.769169 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.779851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbgvh\" (UniqueName: \"kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:11 crc kubenswrapper[4720]: I0202 09:27:11.890743 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:27:12 crc kubenswrapper[4720]: I0202 09:27:12.469699 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9"] Feb 02 09:27:12 crc kubenswrapper[4720]: I0202 09:27:12.471122 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:27:13 crc kubenswrapper[4720]: I0202 09:27:13.416810 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" event={"ID":"58baee1a-0156-461f-9be3-2a44ffedecdb","Type":"ContainerStarted","Data":"a16d24e6ca03caba478d9d3a4d8f4b1b98542073a31b712b0feb7ae9cb512682"} Feb 02 09:27:13 crc kubenswrapper[4720]: I0202 09:27:13.417141 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" event={"ID":"58baee1a-0156-461f-9be3-2a44ffedecdb","Type":"ContainerStarted","Data":"eea8b6d4f98989f922f334e99a1a8beaeefa505bce1b52f14dae8322610a4cb7"} Feb 02 09:27:13 crc kubenswrapper[4720]: I0202 09:27:13.443165 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" podStartSLOduration=1.762425925 podStartE2EDuration="2.44314219s" podCreationTimestamp="2026-02-02 09:27:11 +0000 UTC" firstStartedPulling="2026-02-02 09:27:12.470826569 +0000 UTC m=+1866.326452125" lastFinishedPulling="2026-02-02 09:27:13.151542834 +0000 UTC m=+1867.007168390" observedRunningTime="2026-02-02 09:27:13.435385231 +0000 UTC m=+1867.291010797" watchObservedRunningTime="2026-02-02 09:27:13.44314219 +0000 UTC m=+1867.298767736" Feb 02 09:27:14 crc kubenswrapper[4720]: I0202 09:27:14.894763 4720 
scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:27:14 crc kubenswrapper[4720]: E0202 09:27:14.897416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.051817 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rqpbp"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.076687 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mxwpk"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.084998 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ad5e-account-create-update-lk57d"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.093157 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-278f-account-create-update-pgskx"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.100812 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-snbz6"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.109527 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rqpbp"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.117375 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mxwpk"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.125120 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-278f-account-create-update-pgskx"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.133100 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ad5e-account-create-update-lk57d"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.140827 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-snbz6"] Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.905276 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b62cf19-56cf-4b24-bf4b-417906e61501" path="/var/lib/kubelet/pods/0b62cf19-56cf-4b24-bf4b-417906e61501/volumes" Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.906096 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1550113c-09da-4c3e-9ee1-cd4f28eaa995" path="/var/lib/kubelet/pods/1550113c-09da-4c3e-9ee1-cd4f28eaa995/volumes" Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.906955 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db10941-ba5e-445a-a995-bd1493d5270c" path="/var/lib/kubelet/pods/8db10941-ba5e-445a-a995-bd1493d5270c/volumes" Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.907779 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92fd9095-5cd6-4a99-a5a0-fda750c1a6b7" path="/var/lib/kubelet/pods/92fd9095-5cd6-4a99-a5a0-fda750c1a6b7/volumes" Feb 02 09:27:16 crc kubenswrapper[4720]: I0202 09:27:16.909233 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20dd138-dcb5-4c76-905c-b9eb86dfd50b" path="/var/lib/kubelet/pods/c20dd138-dcb5-4c76-905c-b9eb86dfd50b/volumes" Feb 02 09:27:17 
crc kubenswrapper[4720]: I0202 09:27:17.034285 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9e5c-account-create-update-z6qrn"] Feb 02 09:27:17 crc kubenswrapper[4720]: I0202 09:27:17.045552 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9e5c-account-create-update-z6qrn"] Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.677644 4720 scope.go:117] "RemoveContainer" containerID="5094496fd41abae7f3c3bc4dbf1dc174bc8e2d5103def211499ce0a7b066b99a" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.706638 4720 scope.go:117] "RemoveContainer" containerID="d8a029d1bb263f18a357d51c825b927f733c14016455b59326f5862db1ca7d70" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.759092 4720 scope.go:117] "RemoveContainer" containerID="bad88ee43365b1ca57cd5723f8557d4c9b72ce9481ad29345c4ac151b6647a2c" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.813773 4720 scope.go:117] "RemoveContainer" containerID="b408d649ad6772f3b609eb9cb1867148f1db080f316b2797988f8b830f7273de" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.856541 4720 scope.go:117] "RemoveContainer" containerID="24d619b6a5163d32fdf6e0589a2e7db3bb5de342f77e5ce9f553f69589b7a4b7" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.892025 4720 scope.go:117] "RemoveContainer" containerID="ff83524e6f42165caa034f575caa9f05540b70f11de0c1a3dbe9f44dfe2917d2" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.902082 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87af8537-923b-4bee-8c85-aa7f3d179b6d" path="/var/lib/kubelet/pods/87af8537-923b-4bee-8c85-aa7f3d179b6d/volumes" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.944763 4720 scope.go:117] "RemoveContainer" containerID="304bf2f7a1577cc0a68f62be1ea364ca057b1ff27d74958df943220cf45b8721" Feb 02 09:27:18 crc kubenswrapper[4720]: I0202 09:27:18.984160 4720 scope.go:117] "RemoveContainer" containerID="70a32d6ceb128dad3d55b3f12ea7cf9655b112a67fe2977c604c626098348aaa" Feb 02 09:27:19 crc kubenswrapper[4720]: I0202 09:27:19.013643 4720 scope.go:117] "RemoveContainer" containerID="6d4b8f1c0d49159ae3c41a2922382a62d824228a29cd43542a7fd3c874b68fb5" Feb 02 09:27:19 crc kubenswrapper[4720]: I0202 09:27:19.040165 4720 scope.go:117] "RemoveContainer" containerID="42ed8c4dec72e85c0b1fb4a0aaecdb8414decee7e51d2c4c002472790c8afb46" Feb 02 09:27:29 crc kubenswrapper[4720]: I0202 09:27:29.887420 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:27:29 crc kubenswrapper[4720]: E0202 09:27:29.888549 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:27:43 crc kubenswrapper[4720]: I0202 09:27:43.034897 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vhqsn"] Feb 02 09:27:43 crc kubenswrapper[4720]: I0202 09:27:43.045117 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vhqsn"] Feb 02 09:27:43 crc kubenswrapper[4720]: I0202 09:27:43.887493 4720 scope.go:117] "RemoveContainer" 
containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:27:43 crc kubenswrapper[4720]: E0202 09:27:43.888158 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:27:44 crc kubenswrapper[4720]: I0202 09:27:44.898172 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb" path="/var/lib/kubelet/pods/f40e95f4-8ccc-4d4c-a3b8-5e491c59ebbb/volumes" Feb 02 09:27:55 crc kubenswrapper[4720]: I0202 09:27:55.886820 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:27:55 crc kubenswrapper[4720]: E0202 09:27:55.887624 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:28:00 crc kubenswrapper[4720]: I0202 09:28:00.994055 4720 generic.go:334] "Generic (PLEG): container finished" podID="58baee1a-0156-461f-9be3-2a44ffedecdb" containerID="a16d24e6ca03caba478d9d3a4d8f4b1b98542073a31b712b0feb7ae9cb512682" exitCode=0 Feb 02 09:28:00 crc kubenswrapper[4720]: I0202 09:28:00.994154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" event={"ID":"58baee1a-0156-461f-9be3-2a44ffedecdb","Type":"ContainerDied","Data":"a16d24e6ca03caba478d9d3a4d8f4b1b98542073a31b712b0feb7ae9cb512682"} Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.498123 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.614711 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory\") pod \"58baee1a-0156-461f-9be3-2a44ffedecdb\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.614822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam\") pod \"58baee1a-0156-461f-9be3-2a44ffedecdb\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.615111 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbgvh\" (UniqueName: \"kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh\") pod \"58baee1a-0156-461f-9be3-2a44ffedecdb\" (UID: \"58baee1a-0156-461f-9be3-2a44ffedecdb\") " Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.620350 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh" (OuterVolumeSpecName: "kube-api-access-jbgvh") pod "58baee1a-0156-461f-9be3-2a44ffedecdb" (UID: "58baee1a-0156-461f-9be3-2a44ffedecdb"). InnerVolumeSpecName "kube-api-access-jbgvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.640365 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory" (OuterVolumeSpecName: "inventory") pod "58baee1a-0156-461f-9be3-2a44ffedecdb" (UID: "58baee1a-0156-461f-9be3-2a44ffedecdb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.642074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58baee1a-0156-461f-9be3-2a44ffedecdb" (UID: "58baee1a-0156-461f-9be3-2a44ffedecdb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.717635 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbgvh\" (UniqueName: \"kubernetes.io/projected/58baee1a-0156-461f-9be3-2a44ffedecdb-kube-api-access-jbgvh\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.717675 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:02 crc kubenswrapper[4720]: I0202 09:28:02.717689 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58baee1a-0156-461f-9be3-2a44ffedecdb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.016222 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" event={"ID":"58baee1a-0156-461f-9be3-2a44ffedecdb","Type":"ContainerDied","Data":"eea8b6d4f98989f922f334e99a1a8beaeefa505bce1b52f14dae8322610a4cb7"} Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.016271 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea8b6d4f98989f922f334e99a1a8beaeefa505bce1b52f14dae8322610a4cb7" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.016373 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.113270 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvg7b"] Feb 02 09:28:03 crc kubenswrapper[4720]: E0202 09:28:03.113666 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58baee1a-0156-461f-9be3-2a44ffedecdb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.113682 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="58baee1a-0156-461f-9be3-2a44ffedecdb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.113857 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="58baee1a-0156-461f-9be3-2a44ffedecdb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.114569 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.123685 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvg7b"] Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.159793 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.160056 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.160190 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.160438 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.226218 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlq7d\" (UniqueName: \"kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.226288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.226488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.328287 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlq7d\" (UniqueName: \"kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.328338 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.328400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc 
kubenswrapper[4720]: I0202 09:28:03.332852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.343655 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.344963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlq7d\" (UniqueName: \"kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d\") pod \"ssh-known-hosts-edpm-deployment-qvg7b\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:03 crc kubenswrapper[4720]: I0202 09:28:03.471469 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:04 crc kubenswrapper[4720]: I0202 09:28:04.072745 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qvg7b"] Feb 02 09:28:05 crc kubenswrapper[4720]: I0202 09:28:05.038346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" event={"ID":"9728f5ea-ee17-42d8-a297-958b3247e48e","Type":"ContainerStarted","Data":"cff0cea7655e50802dbd94562bb06cf7c612155d44aaa06a3f89c6def2ad1c85"} Feb 02 09:28:05 crc kubenswrapper[4720]: I0202 09:28:05.038732 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" event={"ID":"9728f5ea-ee17-42d8-a297-958b3247e48e","Type":"ContainerStarted","Data":"8f3011a1d51c17e06b6d4bf2ef4ca6399fe66b119d8a23820f998c9663f69c97"} Feb 02 09:28:05 crc kubenswrapper[4720]: I0202 09:28:05.058088 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" podStartSLOduration=1.619606395 podStartE2EDuration="2.058063615s" podCreationTimestamp="2026-02-02 09:28:03 +0000 UTC" firstStartedPulling="2026-02-02 09:28:04.073050592 +0000 UTC m=+1917.928676188" lastFinishedPulling="2026-02-02 09:28:04.511507822 +0000 UTC m=+1918.367133408" observedRunningTime="2026-02-02 09:28:05.05202753 +0000 UTC m=+1918.907653106" watchObservedRunningTime="2026-02-02 09:28:05.058063615 +0000 UTC m=+1918.913689181" Feb 02 09:28:09 crc kubenswrapper[4720]: I0202 09:28:09.887585 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:28:09 crc kubenswrapper[4720]: E0202 09:28:09.888452 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:28:12 crc 
kubenswrapper[4720]: I0202 09:28:12.110975 4720 generic.go:334] "Generic (PLEG): container finished" podID="9728f5ea-ee17-42d8-a297-958b3247e48e" containerID="cff0cea7655e50802dbd94562bb06cf7c612155d44aaa06a3f89c6def2ad1c85" exitCode=0 Feb 02 09:28:12 crc kubenswrapper[4720]: I0202 09:28:12.111401 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" event={"ID":"9728f5ea-ee17-42d8-a297-958b3247e48e","Type":"ContainerDied","Data":"cff0cea7655e50802dbd94562bb06cf7c612155d44aaa06a3f89c6def2ad1c85"} Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.509902 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.647048 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0\") pod \"9728f5ea-ee17-42d8-a297-958b3247e48e\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.647110 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlq7d\" (UniqueName: \"kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d\") pod \"9728f5ea-ee17-42d8-a297-958b3247e48e\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.647338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam\") pod \"9728f5ea-ee17-42d8-a297-958b3247e48e\" (UID: \"9728f5ea-ee17-42d8-a297-958b3247e48e\") " Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.653670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d" (OuterVolumeSpecName: "kube-api-access-qlq7d") pod "9728f5ea-ee17-42d8-a297-958b3247e48e" (UID: "9728f5ea-ee17-42d8-a297-958b3247e48e"). InnerVolumeSpecName "kube-api-access-qlq7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.676972 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9728f5ea-ee17-42d8-a297-958b3247e48e" (UID: "9728f5ea-ee17-42d8-a297-958b3247e48e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.680105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9728f5ea-ee17-42d8-a297-958b3247e48e" (UID: "9728f5ea-ee17-42d8-a297-958b3247e48e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.749533 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.749571 4720 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9728f5ea-ee17-42d8-a297-958b3247e48e-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:13 crc kubenswrapper[4720]: I0202 09:28:13.749582 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlq7d\" (UniqueName: \"kubernetes.io/projected/9728f5ea-ee17-42d8-a297-958b3247e48e-kube-api-access-qlq7d\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.131365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" event={"ID":"9728f5ea-ee17-42d8-a297-958b3247e48e","Type":"ContainerDied","Data":"8f3011a1d51c17e06b6d4bf2ef4ca6399fe66b119d8a23820f998c9663f69c97"} Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.131733 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3011a1d51c17e06b6d4bf2ef4ca6399fe66b119d8a23820f998c9663f69c97" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.131407 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qvg7b" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.227063 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8"] Feb 02 09:28:14 crc kubenswrapper[4720]: E0202 09:28:14.227871 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9728f5ea-ee17-42d8-a297-958b3247e48e" containerName="ssh-known-hosts-edpm-deployment" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.227909 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9728f5ea-ee17-42d8-a297-958b3247e48e" containerName="ssh-known-hosts-edpm-deployment" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.228362 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9728f5ea-ee17-42d8-a297-958b3247e48e" containerName="ssh-known-hosts-edpm-deployment" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.229329 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.241133 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.241488 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.241794 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.242104 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.264234 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8"] Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.361242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.361296 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.361346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnxm\" (UniqueName: \"kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.462762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.462818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.462866 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnxm\" (UniqueName: \"kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.471856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.480712 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.498349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnxm\" (UniqueName: \"kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-skcc8\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:14 crc kubenswrapper[4720]: I0202 09:28:14.559571 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:15 crc kubenswrapper[4720]: I0202 09:28:15.142577 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8"] Feb 02 09:28:16 crc kubenswrapper[4720]: I0202 09:28:16.151134 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" event={"ID":"7c7deec2-a8b1-445c-8603-c781d4636bac","Type":"ContainerStarted","Data":"1815abad81ec1a747b3e5b9cc094cc1f28af30c4bb967f1268f03a40b9c24bbb"} Feb 02 09:28:16 crc kubenswrapper[4720]: I0202 09:28:16.151478 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" event={"ID":"7c7deec2-a8b1-445c-8603-c781d4636bac","Type":"ContainerStarted","Data":"f844ca3831861c12dc4a7ed96be5e4dc055da3adde3431029c96b965981d2d84"} Feb 02 09:28:16 crc kubenswrapper[4720]: I0202 09:28:16.168466 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" podStartSLOduration=1.661394158 podStartE2EDuration="2.168447646s" podCreationTimestamp="2026-02-02 09:28:14 +0000 UTC" firstStartedPulling="2026-02-02 09:28:15.157437894 +0000 UTC m=+1929.013063470" lastFinishedPulling="2026-02-02 09:28:15.664491382 +0000 UTC m=+1929.520116958" observedRunningTime="2026-02-02 09:28:16.167121684 +0000 UTC m=+1930.022747240" watchObservedRunningTime="2026-02-02 09:28:16.168447646 +0000 UTC m=+1930.024073202" Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 09:28:19.053187 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9hz57"] Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 09:28:19.061557 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9hz57"] Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 
09:28:19.069365 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kjlpf"] Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 09:28:19.076151 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kjlpf"] Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 09:28:19.219164 4720 scope.go:117] "RemoveContainer" containerID="88b2755c6980964cff99cdd98f32f6d0011c8f8313a7b742e3e8d892d180a360" Feb 02 09:28:19 crc kubenswrapper[4720]: I0202 09:28:19.260676 4720 scope.go:117] "RemoveContainer" containerID="40237fb9d5aeb41913032b7f3eea4f29d17a290a68fae49b0d0dae4a880ae1ad" Feb 02 09:28:20 crc kubenswrapper[4720]: I0202 09:28:20.887058 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:28:20 crc kubenswrapper[4720]: E0202 09:28:20.887802 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:28:20 crc kubenswrapper[4720]: I0202 09:28:20.901753 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699b60ee-c039-48cf-8aa4-da649552c691" path="/var/lib/kubelet/pods/699b60ee-c039-48cf-8aa4-da649552c691/volumes" Feb 02 09:28:20 crc kubenswrapper[4720]: I0202 09:28:20.902789 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5" path="/var/lib/kubelet/pods/6bc1d546-ab9b-4673-bc6d-edd1bef7d5b5/volumes" Feb 02 09:28:23 crc kubenswrapper[4720]: I0202 09:28:23.211551 4720 generic.go:334] "Generic (PLEG): container finished" podID="7c7deec2-a8b1-445c-8603-c781d4636bac" containerID="1815abad81ec1a747b3e5b9cc094cc1f28af30c4bb967f1268f03a40b9c24bbb" exitCode=0 Feb 02 09:28:23 crc kubenswrapper[4720]: I0202 09:28:23.211647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" event={"ID":"7c7deec2-a8b1-445c-8603-c781d4636bac","Type":"ContainerDied","Data":"1815abad81ec1a747b3e5b9cc094cc1f28af30c4bb967f1268f03a40b9c24bbb"} Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.723186 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.877443 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam\") pod \"7c7deec2-a8b1-445c-8603-c781d4636bac\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.877556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnxm\" (UniqueName: \"kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm\") pod \"7c7deec2-a8b1-445c-8603-c781d4636bac\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.877723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory\") pod \"7c7deec2-a8b1-445c-8603-c781d4636bac\" (UID: \"7c7deec2-a8b1-445c-8603-c781d4636bac\") " Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.884113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm" (OuterVolumeSpecName: "kube-api-access-mfnxm") pod "7c7deec2-a8b1-445c-8603-c781d4636bac" (UID: "7c7deec2-a8b1-445c-8603-c781d4636bac"). InnerVolumeSpecName "kube-api-access-mfnxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.921842 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory" (OuterVolumeSpecName: "inventory") pod "7c7deec2-a8b1-445c-8603-c781d4636bac" (UID: "7c7deec2-a8b1-445c-8603-c781d4636bac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.925916 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c7deec2-a8b1-445c-8603-c781d4636bac" (UID: "7c7deec2-a8b1-445c-8603-c781d4636bac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.979911 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.980144 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnxm\" (UniqueName: \"kubernetes.io/projected/7c7deec2-a8b1-445c-8603-c781d4636bac-kube-api-access-mfnxm\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:24 crc kubenswrapper[4720]: I0202 09:28:24.980204 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c7deec2-a8b1-445c-8603-c781d4636bac-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.232321 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" event={"ID":"7c7deec2-a8b1-445c-8603-c781d4636bac","Type":"ContainerDied","Data":"f844ca3831861c12dc4a7ed96be5e4dc055da3adde3431029c96b965981d2d84"} Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.232613 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f844ca3831861c12dc4a7ed96be5e4dc055da3adde3431029c96b965981d2d84" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.232687 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-skcc8" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.318228 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv"] Feb 02 09:28:25 crc kubenswrapper[4720]: E0202 09:28:25.318788 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7deec2-a8b1-445c-8603-c781d4636bac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.318855 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7deec2-a8b1-445c-8603-c781d4636bac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.319143 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c7deec2-a8b1-445c-8603-c781d4636bac" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.319822 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.322812 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.323066 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.323404 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.328718 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.336988 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv"] Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.388469 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knjzz\" (UniqueName: \"kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.388922 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.389076 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.491313 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.491388 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.491445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knjzz\" (UniqueName: \"kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.496030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.512759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.513072 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knjzz\" (UniqueName: \"kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:25 crc kubenswrapper[4720]: I0202 09:28:25.637489 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:26 crc kubenswrapper[4720]: I0202 09:28:26.159621 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv"] Feb 02 09:28:26 crc kubenswrapper[4720]: W0202 09:28:26.164347 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c39f840_a7fd_482a_87c1_a2bd895325f1.slice/crio-926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62 WatchSource:0}: Error finding container 926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62: Status 404 returned error can't find the container with id 926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62 Feb 02 09:28:26 crc kubenswrapper[4720]: I0202 09:28:26.263767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" event={"ID":"4c39f840-a7fd-482a-87c1-a2bd895325f1","Type":"ContainerStarted","Data":"926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62"} Feb 02 09:28:28 crc kubenswrapper[4720]: I0202 09:28:28.281049 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" event={"ID":"4c39f840-a7fd-482a-87c1-a2bd895325f1","Type":"ContainerStarted","Data":"2a218fb52a8b49888a0c676a6b3056d8f749eabc541afe52d26c0f64931fff40"} Feb 02 09:28:28 crc kubenswrapper[4720]: I0202 09:28:28.310869 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" podStartSLOduration=2.338652952 podStartE2EDuration="3.310852255s" podCreationTimestamp="2026-02-02 09:28:25 +0000 UTC" firstStartedPulling="2026-02-02 09:28:26.167426558 +0000 UTC m=+1940.023052104" lastFinishedPulling="2026-02-02 09:28:27.139625841 +0000 UTC 
m=+1940.995251407" observedRunningTime="2026-02-02 09:28:28.300100876 +0000 UTC m=+1942.155726442" watchObservedRunningTime="2026-02-02 09:28:28.310852255 +0000 UTC m=+1942.166477801" Feb 02 09:28:31 crc kubenswrapper[4720]: I0202 09:28:31.887454 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:28:31 crc kubenswrapper[4720]: E0202 09:28:31.889674 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:28:36 crc kubenswrapper[4720]: I0202 09:28:36.366325 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c39f840-a7fd-482a-87c1-a2bd895325f1" containerID="2a218fb52a8b49888a0c676a6b3056d8f749eabc541afe52d26c0f64931fff40" exitCode=0 Feb 02 09:28:36 crc kubenswrapper[4720]: I0202 09:28:36.366431 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" event={"ID":"4c39f840-a7fd-482a-87c1-a2bd895325f1","Type":"ContainerDied","Data":"2a218fb52a8b49888a0c676a6b3056d8f749eabc541afe52d26c0f64931fff40"} Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.859345 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.953350 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knjzz\" (UniqueName: \"kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz\") pod \"4c39f840-a7fd-482a-87c1-a2bd895325f1\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.953465 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory\") pod \"4c39f840-a7fd-482a-87c1-a2bd895325f1\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.953497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam\") pod \"4c39f840-a7fd-482a-87c1-a2bd895325f1\" (UID: \"4c39f840-a7fd-482a-87c1-a2bd895325f1\") " Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.977257 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz" (OuterVolumeSpecName: "kube-api-access-knjzz") pod "4c39f840-a7fd-482a-87c1-a2bd895325f1" (UID: "4c39f840-a7fd-482a-87c1-a2bd895325f1"). InnerVolumeSpecName "kube-api-access-knjzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:28:37 crc kubenswrapper[4720]: I0202 09:28:37.989707 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory" (OuterVolumeSpecName: "inventory") pod "4c39f840-a7fd-482a-87c1-a2bd895325f1" (UID: "4c39f840-a7fd-482a-87c1-a2bd895325f1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.005463 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c39f840-a7fd-482a-87c1-a2bd895325f1" (UID: "4c39f840-a7fd-482a-87c1-a2bd895325f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.057074 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knjzz\" (UniqueName: \"kubernetes.io/projected/4c39f840-a7fd-482a-87c1-a2bd895325f1-kube-api-access-knjzz\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.057146 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.057162 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c39f840-a7fd-482a-87c1-a2bd895325f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.386478 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" event={"ID":"4c39f840-a7fd-482a-87c1-a2bd895325f1","Type":"ContainerDied","Data":"926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62"} Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.386791 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="926d3bb917bb882829f93626c78e1c36c2ae4b647175b66774d79a6ced49ec62" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.386542 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.460538 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp"] Feb 02 09:28:38 crc kubenswrapper[4720]: E0202 09:28:38.460904 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c39f840-a7fd-482a-87c1-a2bd895325f1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.460917 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c39f840-a7fd-482a-87c1-a2bd895325f1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.461461 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c39f840-a7fd-482a-87c1-a2bd895325f1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.462114 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465432 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465455 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465493 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465432 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465692 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465829 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.465901 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.466364 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.486517 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp"] Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565243 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565286 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565323 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565411 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565777 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565872 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.565990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566083 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxcb\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566176 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566254 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566486 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566610 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.566766 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669150 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669191 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669212 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669240 
4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxcb\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669291 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669383 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669458 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.669505 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.673239 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.673497 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.674012 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.675911 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.677098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 
09:28:38.678622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.684776 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.685072 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.685138 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.685791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.686580 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.687154 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.689274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxcb\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.693046 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:38 crc kubenswrapper[4720]: I0202 09:28:38.790515 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:28:39 crc kubenswrapper[4720]: I0202 09:28:39.499467 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp"] Feb 02 09:28:40 crc kubenswrapper[4720]: I0202 09:28:40.408700 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" event={"ID":"784ddeb5-955a-4e2e-a5c8-405f97d93cdb","Type":"ContainerStarted","Data":"937e05874e85b3d9dc2addb854251e0b6f58e674134e5c7c0e74c0516b69adbc"} Feb 02 09:28:40 crc kubenswrapper[4720]: I0202 09:28:40.409997 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" event={"ID":"784ddeb5-955a-4e2e-a5c8-405f97d93cdb","Type":"ContainerStarted","Data":"cbc69425bfb30c9dd241a4ded7e39129919e4493a062436f7bd4f608b7ae4d1d"} Feb 02 09:28:40 crc kubenswrapper[4720]: I0202 09:28:40.447814 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" podStartSLOduration=1.973840076 podStartE2EDuration="2.447787964s" podCreationTimestamp="2026-02-02 09:28:38 +0000 UTC" firstStartedPulling="2026-02-02 09:28:39.512746029 +0000 UTC m=+1953.368371585" lastFinishedPulling="2026-02-02 09:28:39.986693917 +0000 UTC m=+1953.842319473" observedRunningTime="2026-02-02 09:28:40.432368281 +0000 UTC m=+1954.287993837" watchObservedRunningTime="2026-02-02 09:28:40.447787964 +0000 UTC m=+1954.303413520" Feb 02 09:28:45 crc kubenswrapper[4720]: I0202 09:28:45.943306 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:28:45 crc kubenswrapper[4720]: E0202 09:28:45.944682 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:28:57 crc kubenswrapper[4720]: I0202 09:28:57.887010 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:28:57 crc kubenswrapper[4720]: E0202 09:28:57.887737 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:29:03 crc kubenswrapper[4720]: I0202 09:29:03.054983 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dnfdm"] Feb 02 09:29:03 crc kubenswrapper[4720]: I0202 09:29:03.067206 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dnfdm"] Feb 02 09:29:04 crc kubenswrapper[4720]: I0202 09:29:04.897816 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39469b5-2d0c-4ae1-9aab-5ca2027938d9" path="/var/lib/kubelet/pods/d39469b5-2d0c-4ae1-9aab-5ca2027938d9/volumes" Feb 02 09:29:09 crc kubenswrapper[4720]: I0202 09:29:09.887272 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:29:09 crc kubenswrapper[4720]: E0202 09:29:09.888147 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:29:15 crc kubenswrapper[4720]: I0202 09:29:15.767740 4720 generic.go:334] "Generic (PLEG): container finished" podID="784ddeb5-955a-4e2e-a5c8-405f97d93cdb" containerID="937e05874e85b3d9dc2addb854251e0b6f58e674134e5c7c0e74c0516b69adbc" exitCode=0 Feb 02 09:29:15 crc kubenswrapper[4720]: I0202 09:29:15.767777 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" event={"ID":"784ddeb5-955a-4e2e-a5c8-405f97d93cdb","Type":"ContainerDied","Data":"937e05874e85b3d9dc2addb854251e0b6f58e674134e5c7c0e74c0516b69adbc"} Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.279260 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.433553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.433915 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.433939 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.433964 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.433998 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbxcb\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434065 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434142 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434202 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: 
\"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434223 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434283 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434304 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.434326 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle\") pod \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\" (UID: \"784ddeb5-955a-4e2e-a5c8-405f97d93cdb\") " Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.440260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.440522 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.441503 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb" (OuterVolumeSpecName: "kube-api-access-qbxcb") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "kube-api-access-qbxcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.441712 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.443252 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.443652 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.443709 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.444491 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.444498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.444625 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.445235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.449292 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.478580 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.480420 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory" (OuterVolumeSpecName: "inventory") pod "784ddeb5-955a-4e2e-a5c8-405f97d93cdb" (UID: "784ddeb5-955a-4e2e-a5c8-405f97d93cdb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536666 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536707 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536719 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536729 4720 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536738 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536748 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536757 4720 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536765 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536773 4720 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536782 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536791 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536799 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536810 4720 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.536821 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbxcb\" (UniqueName: \"kubernetes.io/projected/784ddeb5-955a-4e2e-a5c8-405f97d93cdb-kube-api-access-qbxcb\") on node \"crc\" DevicePath \"\"" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.795161 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" event={"ID":"784ddeb5-955a-4e2e-a5c8-405f97d93cdb","Type":"ContainerDied","Data":"cbc69425bfb30c9dd241a4ded7e39129919e4493a062436f7bd4f608b7ae4d1d"} Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.795218 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc69425bfb30c9dd241a4ded7e39129919e4493a062436f7bd4f608b7ae4d1d" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.795270 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.908063 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67"] Feb 02 09:29:17 crc kubenswrapper[4720]: E0202 09:29:17.908427 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784ddeb5-955a-4e2e-a5c8-405f97d93cdb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.908444 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="784ddeb5-955a-4e2e-a5c8-405f97d93cdb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.908619 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="784ddeb5-955a-4e2e-a5c8-405f97d93cdb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.909233 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.912250 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.912424 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.912431 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.912555 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.913089 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:29:17 crc kubenswrapper[4720]: I0202 09:29:17.926506 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67"] Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.057196 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.057558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.057686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.057836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzcgt\" (UniqueName: \"kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.058372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.160951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.161061 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.161122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.161154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.161194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzcgt\" (UniqueName: \"kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.163475 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.165805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.168504 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.168621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.185751 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzcgt\" (UniqueName: \"kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rss67\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.229139 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.772792 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67"] Feb 02 09:29:18 crc kubenswrapper[4720]: I0202 09:29:18.805655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" event={"ID":"6a454a20-f16c-4627-8c70-65e3ea30a26d","Type":"ContainerStarted","Data":"6f3e5166b716891699d268e2a6dc882799d3ed03f3e1878039373cd57b9ffeb3"} Feb 02 09:29:19 crc kubenswrapper[4720]: I0202 09:29:19.354544 4720 scope.go:117] "RemoveContainer" containerID="72706881b5cf595de4332106d140a68ee4816eae6f02406185e12f4bc571999e" Feb 02 09:29:19 crc kubenswrapper[4720]: I0202 09:29:19.435482 4720 scope.go:117] "RemoveContainer" containerID="00ffd2ae4abda204da28629bd4157dcb779f63e798180503776e18c9637d9287" Feb 02 09:29:19 crc kubenswrapper[4720]: I0202 09:29:19.487804 4720 scope.go:117] "RemoveContainer" containerID="43ab3004951c43990c2e2ee9a9b05d3a6593fcde9731567f5f1f8de6fc8c4113" Feb 02 09:29:19 crc kubenswrapper[4720]: I0202 09:29:19.819501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" event={"ID":"6a454a20-f16c-4627-8c70-65e3ea30a26d","Type":"ContainerStarted","Data":"818cba41c3cb67e9859d8ac5353f00fd5da0840b477763989af04137224bc960"} Feb 02 09:29:19 crc kubenswrapper[4720]: I0202 09:29:19.841016 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" podStartSLOduration=2.351005931 podStartE2EDuration="2.840998467s" podCreationTimestamp="2026-02-02 09:29:17 +0000 UTC" firstStartedPulling="2026-02-02 09:29:18.777321912 +0000 UTC m=+1992.632947468" lastFinishedPulling="2026-02-02 09:29:19.267314448 +0000 UTC m=+1993.122940004" observedRunningTime="2026-02-02 09:29:19.840777002 +0000 UTC m=+1993.696402608" watchObservedRunningTime="2026-02-02 09:29:19.840998467 +0000 UTC m=+1993.696624023" Feb 02 09:29:22 crc kubenswrapper[4720]: I0202 09:29:22.887210 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:29:23 crc kubenswrapper[4720]: I0202 09:29:23.867729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f"} Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.539342 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-76hx2"] Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.542041 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.557950 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76hx2"] Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.708706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsnz4\" (UniqueName: \"kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.708789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.708896 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.810559 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsnz4\" (UniqueName: \"kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.810689 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.811258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.811427 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.811723 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " 
pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.829176 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsnz4\" (UniqueName: \"kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4\") pod \"redhat-operators-76hx2\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") " pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:41 crc kubenswrapper[4720]: I0202 09:29:41.868560 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76hx2" Feb 02 09:29:42 crc kubenswrapper[4720]: I0202 09:29:42.327192 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76hx2"] Feb 02 09:29:42 crc kubenswrapper[4720]: W0202 09:29:42.332425 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aa6d2f5_f447_4311_9706_99877c6ae159.slice/crio-d2d1a68b786acdce3fc76c38aceafa633f38893412ea5285a8d3276837825431 WatchSource:0}: Error finding container d2d1a68b786acdce3fc76c38aceafa633f38893412ea5285a8d3276837825431: Status 404 returned error can't find the container with id d2d1a68b786acdce3fc76c38aceafa633f38893412ea5285a8d3276837825431 Feb 02 09:29:43 crc kubenswrapper[4720]: I0202 09:29:43.069591 4720 generic.go:334] "Generic (PLEG): container finished" podID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerID="f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b" exitCode=0 Feb 02 09:29:43 crc kubenswrapper[4720]: I0202 09:29:43.069874 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerDied","Data":"f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b"} Feb 02 09:29:43 crc kubenswrapper[4720]: I0202 09:29:43.069932 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerStarted","Data":"d2d1a68b786acdce3fc76c38aceafa633f38893412ea5285a8d3276837825431"} Feb 02 09:29:45 crc kubenswrapper[4720]: I0202 09:29:45.095099 4720 generic.go:334] "Generic (PLEG): container finished" podID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerID="9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0" exitCode=0 Feb 02 09:29:45 crc kubenswrapper[4720]: I0202 09:29:45.095177 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerDied","Data":"9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0"} Feb 02 09:29:46 crc kubenswrapper[4720]: I0202 09:29:46.111961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerStarted","Data":"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"} Feb 02 09:29:46 crc kubenswrapper[4720]: I0202 09:29:46.141913 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76hx2" podStartSLOduration=2.423709193 podStartE2EDuration="5.141892287s" podCreationTimestamp="2026-02-02 09:29:41 +0000 UTC" firstStartedPulling="2026-02-02 09:29:43.071833767 +0000 UTC m=+2016.927459323" lastFinishedPulling="2026-02-02 
09:29:45.790016821 +0000 UTC m=+2019.645642417" observedRunningTime="2026-02-02 09:29:46.133966426 +0000 UTC m=+2019.989592032" watchObservedRunningTime="2026-02-02 09:29:46.141892287 +0000 UTC m=+2019.997517853"
Feb 02 09:29:48 crc kubenswrapper[4720]: I0202 09:29:48.935275 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:29:48 crc kubenswrapper[4720]: I0202 09:29:48.937540 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:48 crc kubenswrapper[4720]: I0202 09:29:48.945985 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.075187 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glwjn\" (UniqueName: \"kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.075249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.075289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.177631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glwjn\" (UniqueName: \"kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.177676 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.177737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.178281 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.178313 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.201842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glwjn\" (UniqueName: \"kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn\") pod \"redhat-marketplace-mhhxc\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") " pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.285353 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:49 crc kubenswrapper[4720]: I0202 09:29:49.840214 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:29:50 crc kubenswrapper[4720]: I0202 09:29:50.152378 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerStarted","Data":"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"}
Feb 02 09:29:50 crc kubenswrapper[4720]: I0202 09:29:50.152434 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerStarted","Data":"0223a893dbdf035c3510c5ddb26d4a0829bf7258218e579eb04c256a62657181"}
Feb 02 09:29:51 crc kubenswrapper[4720]: I0202 09:29:51.162056 4720 generic.go:334] "Generic (PLEG): container finished" podID="0cf8570c-5e87-4c79-a908-60312c292756" containerID="800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735" exitCode=0
Feb 02 09:29:51 crc kubenswrapper[4720]: I0202 09:29:51.162263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerDied","Data":"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"}
Feb 02 09:29:51 crc kubenswrapper[4720]: I0202 09:29:51.869050 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:51 crc kubenswrapper[4720]: I0202 09:29:51.869383 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:51 crc kubenswrapper[4720]: I0202 09:29:51.915526 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:52 crc kubenswrapper[4720]: I0202 09:29:52.173304 4720 generic.go:334] "Generic (PLEG): container finished" podID="0cf8570c-5e87-4c79-a908-60312c292756" containerID="e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837" exitCode=0
Feb 02 09:29:52 crc kubenswrapper[4720]: I0202 09:29:52.173423 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerDied","Data":"e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837"}
Feb 02 09:29:52 crc kubenswrapper[4720]: I0202 09:29:52.230329 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:53 crc kubenswrapper[4720]: I0202 09:29:53.183445 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerStarted","Data":"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"}
Feb 02 09:29:53 crc kubenswrapper[4720]: I0202 09:29:53.201413 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhhxc" podStartSLOduration=3.688794156 podStartE2EDuration="5.201394874s" podCreationTimestamp="2026-02-02 09:29:48 +0000 UTC" firstStartedPulling="2026-02-02 09:29:51.164222146 +0000 UTC m=+2025.019847712" lastFinishedPulling="2026-02-02 09:29:52.676822874 +0000 UTC m=+2026.532448430" observedRunningTime="2026-02-02 09:29:53.199029608 +0000 UTC m=+2027.054655184" watchObservedRunningTime="2026-02-02 09:29:53.201394874 +0000 UTC m=+2027.057020430"
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.294680 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76hx2"]
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.295277 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76hx2" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="registry-server" containerID="cri-o://fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f" gracePeriod=2
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.757546 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.928495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content\") pod \"3aa6d2f5-f447-4311-9706-99877c6ae159\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") "
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.928639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsnz4\" (UniqueName: \"kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4\") pod \"3aa6d2f5-f447-4311-9706-99877c6ae159\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") "
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.929641 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities\") pod \"3aa6d2f5-f447-4311-9706-99877c6ae159\" (UID: \"3aa6d2f5-f447-4311-9706-99877c6ae159\") "
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.930510 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities" (OuterVolumeSpecName: "utilities") pod "3aa6d2f5-f447-4311-9706-99877c6ae159" (UID: "3aa6d2f5-f447-4311-9706-99877c6ae159"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:29:54 crc kubenswrapper[4720]: I0202 09:29:54.947747 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4" (OuterVolumeSpecName: "kube-api-access-fsnz4") pod "3aa6d2f5-f447-4311-9706-99877c6ae159" (UID: "3aa6d2f5-f447-4311-9706-99877c6ae159"). InnerVolumeSpecName "kube-api-access-fsnz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.031848 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.031892 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsnz4\" (UniqueName: \"kubernetes.io/projected/3aa6d2f5-f447-4311-9706-99877c6ae159-kube-api-access-fsnz4\") on node \"crc\" DevicePath \"\""
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.060371 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aa6d2f5-f447-4311-9706-99877c6ae159" (UID: "3aa6d2f5-f447-4311-9706-99877c6ae159"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.133308 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa6d2f5-f447-4311-9706-99877c6ae159-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.205266 4720 generic.go:334] "Generic (PLEG): container finished" podID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerID="fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f" exitCode=0
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.205349 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76hx2"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.205335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerDied","Data":"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"}
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.205903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76hx2" event={"ID":"3aa6d2f5-f447-4311-9706-99877c6ae159","Type":"ContainerDied","Data":"d2d1a68b786acdce3fc76c38aceafa633f38893412ea5285a8d3276837825431"}
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.205940 4720 scope.go:117] "RemoveContainer" containerID="fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.224242 4720 scope.go:117] "RemoveContainer" containerID="9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.246083 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76hx2"]
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.262756 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76hx2"]
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.275847 4720 scope.go:117] "RemoveContainer" containerID="f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.319943 4720 scope.go:117] "RemoveContainer" containerID="fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"
Feb 02 09:29:55 crc kubenswrapper[4720]: E0202 09:29:55.320384 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f\": container with ID starting with fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f not found: ID does not exist" containerID="fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.320407 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f"} err="failed to get container status \"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f\": rpc error: code = NotFound desc = could not find container \"fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f\": container with ID starting with fd39c7313e7db24ea95dbffb0d6582e49ef72c18aca7354116ad0b7ffd2f875f not found: ID does not exist"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.320430 4720 scope.go:117] "RemoveContainer" containerID="9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0"
Feb 02 09:29:55 crc kubenswrapper[4720]: E0202 09:29:55.320873 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0\": container with ID starting with 9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0 not found: ID does not exist" containerID="9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.320912 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0"} err="failed to get container status \"9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0\": rpc error: code = NotFound desc = could not find container \"9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0\": container with ID starting with 9a618c86613912d62a4b5f2a8beaed5d608ff0b98e17115e4534ab34c30058f0 not found: ID does not exist"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.320925 4720 scope.go:117] "RemoveContainer" containerID="f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b"
Feb 02 09:29:55 crc kubenswrapper[4720]: E0202 09:29:55.321199 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b\": container with ID starting with f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b not found: ID does not exist" containerID="f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b"
Feb 02 09:29:55 crc kubenswrapper[4720]: I0202 09:29:55.321225 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b"} err="failed to get container status \"f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b\": rpc error: code = NotFound desc = could not find container \"f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b\": container with ID starting with f7cffcda71d72f330c6594ce144ec7a0c15de5163e3ed6e5991f63233b7ab70b not found: ID does not exist"
Feb 02 09:29:56 crc kubenswrapper[4720]: I0202 09:29:56.901214 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" path="/var/lib/kubelet/pods/3aa6d2f5-f447-4311-9706-99877c6ae159/volumes"
Feb 02 09:29:59 crc kubenswrapper[4720]: I0202 09:29:59.287852 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:59 crc kubenswrapper[4720]: I0202 09:29:59.288280 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:29:59 crc kubenswrapper[4720]: I0202 09:29:59.350310 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.148431 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"]
Feb 02 09:30:00 crc kubenswrapper[4720]: E0202 09:30:00.148854 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="extract-content"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.148868 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="extract-content"
Feb 02 09:30:00 crc kubenswrapper[4720]: E0202 09:30:00.148905 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="registry-server"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.148913 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="registry-server"
Feb 02 09:30:00 crc kubenswrapper[4720]: E0202 09:30:00.148926 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="extract-utilities"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.148934 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="extract-utilities"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.149175 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa6d2f5-f447-4311-9706-99877c6ae159" containerName="registry-server"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.149873 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.153935 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.153957 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.179025 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"]
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.315128 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.340961 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8zz\" (UniqueName: \"kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.341040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.341070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.362057 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.443624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.444089 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.444429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8zz\" (UniqueName: \"kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.444824 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.452870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.466592 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8zz\" (UniqueName: \"kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz\") pod \"collect-profiles-29500410-sw6x5\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.470048 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:00 crc kubenswrapper[4720]: I0202 09:30:00.955601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"]
Feb 02 09:30:01 crc kubenswrapper[4720]: I0202 09:30:01.276804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5" event={"ID":"bee8905a-743c-47e7-87d0-94380429512f","Type":"ContainerStarted","Data":"1eeebac1b026f8feb0ef38ddba0df7dff61237ecb5118dc30ebd0158a5b5e73f"}
Feb 02 09:30:01 crc kubenswrapper[4720]: I0202 09:30:01.277185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5" event={"ID":"bee8905a-743c-47e7-87d0-94380429512f","Type":"ContainerStarted","Data":"f15b7dd6032c846962c6c586bb314e560c25d8c1a6c80932e051445a4637c16a"}
Feb 02 09:30:01 crc kubenswrapper[4720]: I0202 09:30:01.306591 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5" podStartSLOduration=1.306572972 podStartE2EDuration="1.306572972s" podCreationTimestamp="2026-02-02 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 09:30:01.297198166 +0000 UTC m=+2035.152823712" watchObservedRunningTime="2026-02-02 09:30:01.306572972 +0000 UTC m=+2035.162198528"
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.289601 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5" event={"ID":"bee8905a-743c-47e7-87d0-94380429512f","Type":"ContainerDied","Data":"1eeebac1b026f8feb0ef38ddba0df7dff61237ecb5118dc30ebd0158a5b5e73f"}
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.289717 4720 generic.go:334] "Generic (PLEG): container finished" podID="bee8905a-743c-47e7-87d0-94380429512f" containerID="1eeebac1b026f8feb0ef38ddba0df7dff61237ecb5118dc30ebd0158a5b5e73f" exitCode=0
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.290263 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhhxc" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="registry-server" containerID="cri-o://94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0" gracePeriod=2
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.744077 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.793407 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content\") pod \"0cf8570c-5e87-4c79-a908-60312c292756\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") "
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.793868 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glwjn\" (UniqueName: \"kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn\") pod \"0cf8570c-5e87-4c79-a908-60312c292756\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") "
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.794201 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities\") pod \"0cf8570c-5e87-4c79-a908-60312c292756\" (UID: \"0cf8570c-5e87-4c79-a908-60312c292756\") "
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.795126 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities" (OuterVolumeSpecName: "utilities") pod "0cf8570c-5e87-4c79-a908-60312c292756" (UID: "0cf8570c-5e87-4c79-a908-60312c292756"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.805263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn" (OuterVolumeSpecName: "kube-api-access-glwjn") pod "0cf8570c-5e87-4c79-a908-60312c292756" (UID: "0cf8570c-5e87-4c79-a908-60312c292756"). InnerVolumeSpecName "kube-api-access-glwjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.827080 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf8570c-5e87-4c79-a908-60312c292756" (UID: "0cf8570c-5e87-4c79-a908-60312c292756"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.896954 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.897002 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glwjn\" (UniqueName: \"kubernetes.io/projected/0cf8570c-5e87-4c79-a908-60312c292756-kube-api-access-glwjn\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:02 crc kubenswrapper[4720]: I0202 09:30:02.897013 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf8570c-5e87-4c79-a908-60312c292756-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.304038 4720 generic.go:334] "Generic (PLEG): container finished" podID="0cf8570c-5e87-4c79-a908-60312c292756" containerID="94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0" exitCode=0
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.304157 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhhxc"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.304218 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerDied","Data":"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"}
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.304266 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhhxc" event={"ID":"0cf8570c-5e87-4c79-a908-60312c292756","Type":"ContainerDied","Data":"0223a893dbdf035c3510c5ddb26d4a0829bf7258218e579eb04c256a62657181"}
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.304299 4720 scope.go:117] "RemoveContainer" containerID="94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.342664 4720 scope.go:117] "RemoveContainer" containerID="e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.346082 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.366613 4720 scope.go:117] "RemoveContainer" containerID="800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.371693 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhhxc"]
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.414760 4720 scope.go:117] "RemoveContainer" containerID="94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"
Feb 02 09:30:03 crc kubenswrapper[4720]: E0202 09:30:03.415457 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0\": container with ID starting with 94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0 not found: ID does not exist" containerID="94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.415564 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0"} err="failed to get container status \"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0\": rpc error: code = NotFound desc = could not find container \"94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0\": container with ID starting with 94f0ea6f75507838c4b09ddfccd6b0bd08be76ba5a83b81a2afe5f12e714ffa0 not found: ID does not exist"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.415825 4720 scope.go:117] "RemoveContainer" containerID="e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837"
Feb 02 09:30:03 crc kubenswrapper[4720]: E0202 09:30:03.416178 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837\": container with ID starting with e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837 not found: ID does not exist" containerID="e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.416284 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837"} err="failed to get container status \"e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837\": rpc error: code = NotFound desc = could not find container \"e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837\": container with ID starting with e40033fda602c29fa0846aca59e694073402faf9a476c7efa1827a0ccbdc2837 not found: ID does not exist"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.416368 4720 scope.go:117] "RemoveContainer" containerID="800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"
Feb 02 09:30:03 crc kubenswrapper[4720]: E0202 09:30:03.416753 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735\": container with ID starting with 800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735 not found: ID does not exist" containerID="800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.416846 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735"} err="failed to get container status \"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735\": rpc error: code = NotFound desc = could not find container \"800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735\": container with ID starting with 800414414e584b743dd4a008cfbc2676a2173b2e7abce67900d95693a5132735 not found: ID does not exist"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.667008 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.814382 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume\") pod \"bee8905a-743c-47e7-87d0-94380429512f\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") "
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.814846 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume\") pod \"bee8905a-743c-47e7-87d0-94380429512f\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") "
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.815074 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg8zz\" (UniqueName: \"kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz\") pod \"bee8905a-743c-47e7-87d0-94380429512f\" (UID: \"bee8905a-743c-47e7-87d0-94380429512f\") "
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.816958 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume" (OuterVolumeSpecName: "config-volume") pod "bee8905a-743c-47e7-87d0-94380429512f" (UID: "bee8905a-743c-47e7-87d0-94380429512f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.820200 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz" (OuterVolumeSpecName: "kube-api-access-dg8zz") pod "bee8905a-743c-47e7-87d0-94380429512f" (UID: "bee8905a-743c-47e7-87d0-94380429512f"). InnerVolumeSpecName "kube-api-access-dg8zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.820404 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bee8905a-743c-47e7-87d0-94380429512f" (UID: "bee8905a-743c-47e7-87d0-94380429512f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.916976 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg8zz\" (UniqueName: \"kubernetes.io/projected/bee8905a-743c-47e7-87d0-94380429512f-kube-api-access-dg8zz\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.917222 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee8905a-743c-47e7-87d0-94380429512f-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:03 crc kubenswrapper[4720]: I0202 09:30:03.917290 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee8905a-743c-47e7-87d0-94380429512f-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.316099 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5" event={"ID":"bee8905a-743c-47e7-87d0-94380429512f","Type":"ContainerDied","Data":"f15b7dd6032c846962c6c586bb314e560c25d8c1a6c80932e051445a4637c16a"}
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.316107 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.316160 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15b7dd6032c846962c6c586bb314e560c25d8c1a6c80932e051445a4637c16a"
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.370555 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"]
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.378848 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500365-jllzp"]
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.899626 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf8570c-5e87-4c79-a908-60312c292756" path="/var/lib/kubelet/pods/0cf8570c-5e87-4c79-a908-60312c292756/volumes"
Feb 02 09:30:04 crc kubenswrapper[4720]: I0202 09:30:04.901162 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37615bea-3d49-45d6-b190-450e2e078977" path="/var/lib/kubelet/pods/37615bea-3d49-45d6-b190-450e2e078977/volumes"
Feb 02 09:30:15 crc kubenswrapper[4720]: I0202 09:30:15.435382 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a454a20-f16c-4627-8c70-65e3ea30a26d" containerID="818cba41c3cb67e9859d8ac5353f00fd5da0840b477763989af04137224bc960" exitCode=0
Feb 02 09:30:15 crc kubenswrapper[4720]: I0202 09:30:15.435462 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" event={"ID":"6a454a20-f16c-4627-8c70-65e3ea30a26d","Type":"ContainerDied","Data":"818cba41c3cb67e9859d8ac5353f00fd5da0840b477763989af04137224bc960"}
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.847127 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67"
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.892958 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzcgt\" (UniqueName: \"kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt\") pod \"6a454a20-f16c-4627-8c70-65e3ea30a26d\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") "
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.893003 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam\") pod \"6a454a20-f16c-4627-8c70-65e3ea30a26d\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") "
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.893059 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle\") pod \"6a454a20-f16c-4627-8c70-65e3ea30a26d\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") "
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.893115 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0\") pod \"6a454a20-f16c-4627-8c70-65e3ea30a26d\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") "
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.893290 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory\") pod \"6a454a20-f16c-4627-8c70-65e3ea30a26d\" (UID: \"6a454a20-f16c-4627-8c70-65e3ea30a26d\") "
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.904686 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6a454a20-f16c-4627-8c70-65e3ea30a26d" (UID: "6a454a20-f16c-4627-8c70-65e3ea30a26d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.904858 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt" (OuterVolumeSpecName: "kube-api-access-zzcgt") pod "6a454a20-f16c-4627-8c70-65e3ea30a26d" (UID: "6a454a20-f16c-4627-8c70-65e3ea30a26d"). InnerVolumeSpecName "kube-api-access-zzcgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.922751 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a454a20-f16c-4627-8c70-65e3ea30a26d" (UID: "6a454a20-f16c-4627-8c70-65e3ea30a26d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.926590 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory" (OuterVolumeSpecName: "inventory") pod "6a454a20-f16c-4627-8c70-65e3ea30a26d" (UID: "6a454a20-f16c-4627-8c70-65e3ea30a26d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.928177 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "6a454a20-f16c-4627-8c70-65e3ea30a26d" (UID: "6a454a20-f16c-4627-8c70-65e3ea30a26d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.995672 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzcgt\" (UniqueName: \"kubernetes.io/projected/6a454a20-f16c-4627-8c70-65e3ea30a26d-kube-api-access-zzcgt\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.995719 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.995735 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.995750 4720 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6a454a20-f16c-4627-8c70-65e3ea30a26d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:16 crc kubenswrapper[4720]: I0202 09:30:16.995764 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a454a20-f16c-4627-8c70-65e3ea30a26d-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.462864 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67" event={"ID":"6a454a20-f16c-4627-8c70-65e3ea30a26d","Type":"ContainerDied","Data":"6f3e5166b716891699d268e2a6dc882799d3ed03f3e1878039373cd57b9ffeb3"}
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.462937 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3e5166b716891699d268e2a6dc882799d3ed03f3e1878039373cd57b9ffeb3"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.463018 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rss67"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.556982 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"]
Feb 02 09:30:17 crc kubenswrapper[4720]: E0202 09:30:17.559179 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a454a20-f16c-4627-8c70-65e3ea30a26d" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.559209 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a454a20-f16c-4627-8c70-65e3ea30a26d" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:30:17 crc kubenswrapper[4720]: E0202 09:30:17.559229 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="registry-server"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.559238 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="registry-server"
Feb 02 09:30:17 crc kubenswrapper[4720]: E0202 09:30:17.559271 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee8905a-743c-47e7-87d0-94380429512f" containerName="collect-profiles"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.559277 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee8905a-743c-47e7-87d0-94380429512f" containerName="collect-profiles"
Feb 02 09:30:17 crc kubenswrapper[4720]: E0202 09:30:17.559307 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="extract-utilities"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.559313 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="extract-utilities"
Feb 02 09:30:17 crc kubenswrapper[4720]: E0202 09:30:17.559332 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="extract-content"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.559338 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="extract-content"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.560263 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a454a20-f16c-4627-8c70-65e3ea30a26d" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.560348 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf8570c-5e87-4c79-a908-60312c292756" containerName="registry-server"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.560417 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee8905a-743c-47e7-87d0-94380429512f" containerName="collect-profiles"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.562270 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.565898 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.566067 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.566176 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.566278 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.566391 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.566573 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.576731 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"]
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.707797 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.707850 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprh4\" (UniqueName: \"kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.707897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.708011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.708075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.708210 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810497 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810810 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bprh4\" (UniqueName: \"kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.810947 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.817006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.817373 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.818120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.818946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.820785 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.842161 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bprh4\" (UniqueName: \"kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:17 crc kubenswrapper[4720]: I0202 09:30:17.885626 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:30:18 crc kubenswrapper[4720]: I0202 09:30:18.451031 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"]
Feb 02 09:30:18 crc kubenswrapper[4720]: I0202 09:30:18.473957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" event={"ID":"b9e622f6-37ab-46e2-98f9-475b39bc469a","Type":"ContainerStarted","Data":"f82ebb70093eef5db2280d7f9255efc468f9e73351f3d272ff4c8a4c1db11fed"}
Feb 02 09:30:19 crc kubenswrapper[4720]: I0202 09:30:19.491293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" event={"ID":"b9e622f6-37ab-46e2-98f9-475b39bc469a","Type":"ContainerStarted","Data":"e2bfd2c8c55e2d7ae1a2a4d9b96a17d7569a3e90ddf90314c7c909fc350536cb"}
Feb 02 09:30:19 crc kubenswrapper[4720]: I0202 09:30:19.525648 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" podStartSLOduration=1.996198881 podStartE2EDuration="2.52562421s" podCreationTimestamp="2026-02-02 09:30:17 +0000 UTC" firstStartedPulling="2026-02-02 09:30:18.445242181 +0000 UTC m=+2052.300867737" lastFinishedPulling="2026-02-02 09:30:18.9746675 +0000 UTC m=+2052.830293066" observedRunningTime="2026-02-02 09:30:19.514332907 +0000 UTC m=+2053.369958473" watchObservedRunningTime="2026-02-02 09:30:19.52562421 +0000 UTC m=+2053.381249786"
Feb 02 09:30:19 crc kubenswrapper[4720]: I0202 09:30:19.612164 4720 scope.go:117] "RemoveContainer" containerID="384d267a4ffc7043539cbe62adf85065eab6bdf2cc90b9895fbb7f032a7a7b8d"
Feb 02 09:31:04 crc kubenswrapper[4720]: I0202 09:31:04.937082 4720 generic.go:334] "Generic (PLEG): container finished" podID="b9e622f6-37ab-46e2-98f9-475b39bc469a" containerID="e2bfd2c8c55e2d7ae1a2a4d9b96a17d7569a3e90ddf90314c7c909fc350536cb" exitCode=0
Feb 02 09:31:04 crc kubenswrapper[4720]: I0202 09:31:04.937185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" event={"ID":"b9e622f6-37ab-46e2-98f9-475b39bc469a","Type":"ContainerDied","Data":"e2bfd2c8c55e2d7ae1a2a4d9b96a17d7569a3e90ddf90314c7c909fc350536cb"}
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.374167 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt"
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552521 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552652 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552763 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bprh4\" (UniqueName: \"kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.552958 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"b9e622f6-37ab-46e2-98f9-475b39bc469a\" (UID: \"b9e622f6-37ab-46e2-98f9-475b39bc469a\") "
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.558669 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4" (OuterVolumeSpecName: "kube-api-access-bprh4") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "kube-api-access-bprh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.573227 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.584530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.586850 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.587857 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory" (OuterVolumeSpecName: "inventory") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.588947 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "b9e622f6-37ab-46e2-98f9-475b39bc469a" (UID: "b9e622f6-37ab-46e2-98f9-475b39bc469a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.655811 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.656115 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.656210 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bprh4\" (UniqueName: \"kubernetes.io/projected/b9e622f6-37ab-46e2-98f9-475b39bc469a-kube-api-access-bprh4\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.656274 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.656331 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.656397 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b9e622f6-37ab-46e2-98f9-475b39bc469a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.956483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" event={"ID":"b9e622f6-37ab-46e2-98f9-475b39bc469a","Type":"ContainerDied","Data":"f82ebb70093eef5db2280d7f9255efc468f9e73351f3d272ff4c8a4c1db11fed"} Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.956535 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82ebb70093eef5db2280d7f9255efc468f9e73351f3d272ff4c8a4c1db11fed" Feb 02 09:31:06 crc kubenswrapper[4720]: I0202 09:31:06.956565 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.122823 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l"] Feb 02 09:31:07 crc kubenswrapper[4720]: E0202 09:31:07.123612 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e622f6-37ab-46e2-98f9-475b39bc469a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.123637 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e622f6-37ab-46e2-98f9-475b39bc469a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.123867 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e622f6-37ab-46e2-98f9-475b39bc469a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.124695 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138133 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l"] Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138258 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138299 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138379 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138388 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.138492 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.269401 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.269485 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.269534 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: 
\"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.269564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.269687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6bq\" (UniqueName: \"kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.372439 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6bq\" (UniqueName: \"kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.372923 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.373061 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.373232 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.373379 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.378998 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.379188 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.379919 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.383525 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.402731 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6bq\" (UniqueName: \"kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:07 crc kubenswrapper[4720]: I0202 09:31:07.444644 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:31:08 crc kubenswrapper[4720]: I0202 09:31:08.053024 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l"] Feb 02 09:31:09 crc kubenswrapper[4720]: I0202 09:31:09.006208 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" event={"ID":"4ea7861d-22fa-43cc-ad91-d700bd7e025b","Type":"ContainerStarted","Data":"20e86676ed4603a9671e7c7bc2e081f7ef139162324e7469cc785117e3686534"} Feb 02 09:31:09 crc kubenswrapper[4720]: I0202 09:31:09.006693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" event={"ID":"4ea7861d-22fa-43cc-ad91-d700bd7e025b","Type":"ContainerStarted","Data":"9d77597eadd1f63022634aeb5e97d2734112cd57a181108a605aaf925e53256b"} Feb 02 09:31:09 crc kubenswrapper[4720]: I0202 09:31:09.032432 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" podStartSLOduration=1.618116331 podStartE2EDuration="2.032413077s" podCreationTimestamp="2026-02-02 09:31:07 +0000 UTC" firstStartedPulling="2026-02-02 09:31:08.065737257 +0000 UTC m=+2101.921362813" lastFinishedPulling="2026-02-02 09:31:08.480034003 +0000 UTC m=+2102.335659559" observedRunningTime="2026-02-02 09:31:09.020034727 +0000 UTC m=+2102.875660303" watchObservedRunningTime="2026-02-02 09:31:09.032413077 +0000 UTC m=+2102.888038643" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.290105 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.292945 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.328204 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.335378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mfbw\" (UniqueName: \"kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.335420 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.335644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.437538 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mfbw\" (UniqueName: \"kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.437596 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.437703 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.438219 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.438303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.458869 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8mfbw\" (UniqueName: \"kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw\") pod \"community-operators-bm5g8\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:18 crc kubenswrapper[4720]: I0202 09:31:18.623851 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:19 crc kubenswrapper[4720]: I0202 09:31:19.122401 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:20 crc kubenswrapper[4720]: I0202 09:31:20.127477 4720 generic.go:334] "Generic (PLEG): container finished" podID="7ce14e4a-a060-4533-aa95-907f718d7473" containerID="c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760" exitCode=0 Feb 02 09:31:20 crc kubenswrapper[4720]: I0202 09:31:20.128245 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerDied","Data":"c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760"} Feb 02 09:31:20 crc kubenswrapper[4720]: I0202 09:31:20.128401 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerStarted","Data":"94c574f841d0e3d9bfa95b416c82d666d9fce568acaa3cf8b44da5228c605bcb"} Feb 02 09:31:22 crc kubenswrapper[4720]: I0202 09:31:22.154039 4720 generic.go:334] "Generic (PLEG): container finished" podID="7ce14e4a-a060-4533-aa95-907f718d7473" containerID="a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f" exitCode=0 Feb 02 09:31:22 crc kubenswrapper[4720]: I0202 09:31:22.154152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerDied","Data":"a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f"} Feb 02 09:31:23 crc kubenswrapper[4720]: I0202 09:31:23.166284 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerStarted","Data":"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e"} Feb 02 09:31:23 crc kubenswrapper[4720]: I0202 09:31:23.187992 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bm5g8" podStartSLOduration=2.758001039 podStartE2EDuration="5.187972536s" podCreationTimestamp="2026-02-02 09:31:18 +0000 UTC" firstStartedPulling="2026-02-02 09:31:20.134291991 +0000 UTC m=+2113.989917597" lastFinishedPulling="2026-02-02 09:31:22.564263538 +0000 UTC m=+2116.419889094" observedRunningTime="2026-02-02 09:31:23.1819106 +0000 UTC m=+2117.037536156" watchObservedRunningTime="2026-02-02 09:31:23.187972536 +0000 UTC m=+2117.043598092" Feb 02 09:31:28 crc kubenswrapper[4720]: I0202 09:31:28.625257 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:28 crc kubenswrapper[4720]: I0202 09:31:28.626090 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:28 crc kubenswrapper[4720]: I0202 09:31:28.700650 4720 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:29 crc kubenswrapper[4720]: I0202 09:31:29.305476 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:29 crc kubenswrapper[4720]: I0202 09:31:29.374535 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.244143 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bm5g8" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="registry-server" containerID="cri-o://7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e" gracePeriod=2 Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.699234 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.815822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mfbw\" (UniqueName: \"kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw\") pod \"7ce14e4a-a060-4533-aa95-907f718d7473\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.815957 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content\") pod \"7ce14e4a-a060-4533-aa95-907f718d7473\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.816004 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities\") pod \"7ce14e4a-a060-4533-aa95-907f718d7473\" (UID: \"7ce14e4a-a060-4533-aa95-907f718d7473\") " Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.817001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities" (OuterVolumeSpecName: "utilities") pod "7ce14e4a-a060-4533-aa95-907f718d7473" (UID: "7ce14e4a-a060-4533-aa95-907f718d7473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.820870 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw" (OuterVolumeSpecName: "kube-api-access-8mfbw") pod "7ce14e4a-a060-4533-aa95-907f718d7473" (UID: "7ce14e4a-a060-4533-aa95-907f718d7473"). InnerVolumeSpecName "kube-api-access-8mfbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.919092 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mfbw\" (UniqueName: \"kubernetes.io/projected/7ce14e4a-a060-4533-aa95-907f718d7473-kube-api-access-8mfbw\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:31 crc kubenswrapper[4720]: I0202 09:31:31.919552 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.109289 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ce14e4a-a060-4533-aa95-907f718d7473" (UID: "7ce14e4a-a060-4533-aa95-907f718d7473"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.124389 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce14e4a-a060-4533-aa95-907f718d7473-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.254384 4720 generic.go:334] "Generic (PLEG): container finished" podID="7ce14e4a-a060-4533-aa95-907f718d7473" containerID="7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e" exitCode=0 Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.254439 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bm5g8" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.254459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerDied","Data":"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e"} Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.255914 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bm5g8" event={"ID":"7ce14e4a-a060-4533-aa95-907f718d7473","Type":"ContainerDied","Data":"94c574f841d0e3d9bfa95b416c82d666d9fce568acaa3cf8b44da5228c605bcb"} Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.255938 4720 scope.go:117] "RemoveContainer" containerID="7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.283840 4720 scope.go:117] "RemoveContainer" containerID="a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.309124 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.315591 4720 scope.go:117] "RemoveContainer" containerID="c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.319096 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bm5g8"] Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.356106 4720 scope.go:117] "RemoveContainer" containerID="7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e" Feb 02 09:31:32 crc kubenswrapper[4720]: E0202 09:31:32.356603 4720 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e\": container with ID starting with 7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e not found: ID does not exist" containerID="7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.356643 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e"} err="failed to get container status \"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e\": rpc error: code = NotFound desc = could not find container \"7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e\": container with ID starting with 7574d7a348bf1406a64f23752774be9412ecadd838cbe9951a318642703c6e9e not found: ID does not exist" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.356676 4720 scope.go:117] "RemoveContainer" containerID="a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f" Feb 02 09:31:32 crc kubenswrapper[4720]: E0202 09:31:32.357191 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f\": container with ID starting with a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f not found: ID does not exist" containerID="a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.357225 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f"} err="failed to get container status \"a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f\": rpc error: code = NotFound desc = could not find container \"a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f\": container with ID starting with a276edc56e3fcacfdaaf90bf54ca0d1cee339a76e6e4b8005df876792ef6879f not found: ID does not exist" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.357248 4720 scope.go:117] "RemoveContainer" containerID="c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760" Feb 02 09:31:32 crc kubenswrapper[4720]: E0202 09:31:32.357617 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760\": container with ID starting with c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760 not found: ID does not exist" containerID="c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.357644 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760"} err="failed to get container status \"c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760\": rpc error: code = NotFound desc = could not find container \"c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760\": container with ID starting with c9e303d31b1c93a2337adf3b552f007f1bae2ad63db46cccb6a1f1ef0d4a6760 not found: ID does not exist" Feb 02 09:31:32 crc kubenswrapper[4720]: I0202 09:31:32.897297 4720 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" path="/var/lib/kubelet/pods/7ce14e4a-a060-4533-aa95-907f718d7473/volumes" Feb 02 09:31:47 crc kubenswrapper[4720]: I0202 09:31:47.902267 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:31:47 crc kubenswrapper[4720]: I0202 09:31:47.902844 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:32:17 crc kubenswrapper[4720]: I0202 09:32:17.901853 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:32:17 crc kubenswrapper[4720]: I0202 09:32:17.902525 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:32:47 crc kubenswrapper[4720]: I0202 09:32:47.902198 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:32:47 crc kubenswrapper[4720]: I0202 09:32:47.902973 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:32:47 crc kubenswrapper[4720]: I0202 09:32:47.903042 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:32:47 crc kubenswrapper[4720]: I0202 09:32:47.904200 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:32:47 crc kubenswrapper[4720]: I0202 09:32:47.904308 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f" gracePeriod=600 Feb 02 09:32:48 crc kubenswrapper[4720]: I0202 09:32:48.046614 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f" exitCode=0 Feb 02 09:32:48 crc kubenswrapper[4720]: I0202 09:32:48.046657 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f"} Feb 02 09:32:48 crc kubenswrapper[4720]: I0202 09:32:48.046692 4720 scope.go:117] "RemoveContainer" containerID="5e67421f09b6ab99e7723384da5299a71a7fcb31f4bd33566582e7f039a39c97" Feb 02 09:32:49 crc kubenswrapper[4720]: I0202 09:32:49.057782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf"} Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.914352 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:32:53 crc kubenswrapper[4720]: E0202 09:32:53.915408 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="registry-server" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.915424 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="registry-server" Feb 02 09:32:53 crc kubenswrapper[4720]: E0202 09:32:53.915464 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="extract-utilities" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.915472 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="extract-utilities" Feb 02 09:32:53 crc kubenswrapper[4720]: E0202 09:32:53.915486 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="extract-content" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.915493 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="extract-content" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.915685 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce14e4a-a060-4533-aa95-907f718d7473" containerName="registry-server" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.917568 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:53 crc kubenswrapper[4720]: I0202 09:32:53.931078 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.029101 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv44x\" (UniqueName: \"kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.029229 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.029358 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.130695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.130844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.130922 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv44x\" (UniqueName: \"kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.131325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.131347 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.167143 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pv44x\" (UniqueName: \"kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x\") pod \"certified-operators-q7d64\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.244547 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:32:54 crc kubenswrapper[4720]: I0202 09:32:54.730508 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:32:55 crc kubenswrapper[4720]: I0202 09:32:55.131226 4720 generic.go:334] "Generic (PLEG): container finished" podID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerID="c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75" exitCode=0 Feb 02 09:32:55 crc kubenswrapper[4720]: I0202 09:32:55.131340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerDied","Data":"c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75"} Feb 02 09:32:55 crc kubenswrapper[4720]: I0202 09:32:55.131598 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerStarted","Data":"2dd64a7704a9afef262fed74ceca9cee7ff7eb98796c08fc315f3ad3d1e059cf"} Feb 02 09:32:55 crc kubenswrapper[4720]: I0202 09:32:55.134129 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:32:57 crc kubenswrapper[4720]: I0202 09:32:57.164188 4720 generic.go:334] "Generic (PLEG): container finished" podID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerID="bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf" exitCode=0 Feb 02 09:32:57 crc kubenswrapper[4720]: I0202 09:32:57.164255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerDied","Data":"bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf"} Feb 02 09:32:58 crc kubenswrapper[4720]: I0202 09:32:58.181436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerStarted","Data":"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d"} Feb 02 09:32:58 crc kubenswrapper[4720]: I0202 09:32:58.199384 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7d64" podStartSLOduration=2.767261538 podStartE2EDuration="5.199364341s" podCreationTimestamp="2026-02-02 09:32:53 +0000 UTC" firstStartedPulling="2026-02-02 09:32:55.133796285 +0000 UTC m=+2208.989421831" lastFinishedPulling="2026-02-02 09:32:57.565899078 +0000 UTC m=+2211.421524634" observedRunningTime="2026-02-02 09:32:58.197432253 +0000 UTC m=+2212.053057819" watchObservedRunningTime="2026-02-02 09:32:58.199364341 +0000 UTC m=+2212.054989897" Feb 02 09:33:04 crc kubenswrapper[4720]: I0202 09:33:04.245674 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:04 crc kubenswrapper[4720]: I0202 09:33:04.246539 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:04 crc kubenswrapper[4720]: I0202 09:33:04.299301 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:05 crc kubenswrapper[4720]: I0202 09:33:05.338949 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:05 crc kubenswrapper[4720]: I0202 09:33:05.402475 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.281559 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7d64" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="registry-server" containerID="cri-o://341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d" gracePeriod=2 Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.741049 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.831288 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv44x\" (UniqueName: \"kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x\") pod \"032ab383-5d92-4675-b1a2-9aec9bfd87af\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.831696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities\") pod \"032ab383-5d92-4675-b1a2-9aec9bfd87af\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.831748 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content\") pod \"032ab383-5d92-4675-b1a2-9aec9bfd87af\" (UID: \"032ab383-5d92-4675-b1a2-9aec9bfd87af\") " Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.832851 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities" (OuterVolumeSpecName: "utilities") pod "032ab383-5d92-4675-b1a2-9aec9bfd87af" (UID: "032ab383-5d92-4675-b1a2-9aec9bfd87af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.838376 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x" (OuterVolumeSpecName: "kube-api-access-pv44x") pod "032ab383-5d92-4675-b1a2-9aec9bfd87af" (UID: "032ab383-5d92-4675-b1a2-9aec9bfd87af"). InnerVolumeSpecName "kube-api-access-pv44x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.892588 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "032ab383-5d92-4675-b1a2-9aec9bfd87af" (UID: "032ab383-5d92-4675-b1a2-9aec9bfd87af"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.934440 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.934934 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/032ab383-5d92-4675-b1a2-9aec9bfd87af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:33:07 crc kubenswrapper[4720]: I0202 09:33:07.935053 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv44x\" (UniqueName: \"kubernetes.io/projected/032ab383-5d92-4675-b1a2-9aec9bfd87af-kube-api-access-pv44x\") on node \"crc\" DevicePath \"\"" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.297385 4720 generic.go:334] "Generic (PLEG): container finished" podID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerID="341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d" exitCode=0 Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.297423 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerDied","Data":"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d"} Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.297450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7d64" event={"ID":"032ab383-5d92-4675-b1a2-9aec9bfd87af","Type":"ContainerDied","Data":"2dd64a7704a9afef262fed74ceca9cee7ff7eb98796c08fc315f3ad3d1e059cf"} Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.297469 4720 scope.go:117] "RemoveContainer" containerID="341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.298015 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7d64" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.340461 4720 scope.go:117] "RemoveContainer" containerID="bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.370118 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.378022 4720 scope.go:117] "RemoveContainer" containerID="c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.383900 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7d64"] Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.429529 4720 scope.go:117] "RemoveContainer" containerID="341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d" Feb 02 09:33:08 crc kubenswrapper[4720]: E0202 09:33:08.429983 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d\": container with ID starting with 341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d not found: ID does not exist" containerID="341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.430015 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d"} err="failed to get container status \"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d\": rpc error: code = NotFound desc = could not find container \"341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d\": container with ID starting with 341cbfc695c4c1611706a1bbee70728a7dcaba00bf54b8c567dce5d63bdc5b1d not found: ID does not exist" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.430042 4720 scope.go:117] "RemoveContainer" containerID="bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf" Feb 02 09:33:08 crc kubenswrapper[4720]: E0202 09:33:08.430793 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf\": container with ID starting with bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf not found: ID does not exist" containerID="bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.430829 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf"} err="failed to get container status \"bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf\": rpc error: code = NotFound desc = could not find container \"bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf\": container with ID starting with bb5212a036896c1bf27cbd9952c21128084546a89720ccc2caf4447d4ff958bf not found: ID does not exist" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.430849 4720 scope.go:117] "RemoveContainer" containerID="c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75" Feb 02 09:33:08 crc kubenswrapper[4720]: E0202 09:33:08.431395 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75\": container with ID starting with c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75 not found: ID does not exist" containerID="c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.431425 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75"} err="failed to get container status \"c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75\": rpc error: code = NotFound desc = could not find container \"c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75\": container with ID starting with c192dd26e1187de26be6d4bc6093ff526eaa2f0a25c189ef80cbfc80da40ca75 not found: ID does not exist" Feb 02 09:33:08 crc kubenswrapper[4720]: I0202 09:33:08.914106 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" path="/var/lib/kubelet/pods/032ab383-5d92-4675-b1a2-9aec9bfd87af/volumes" Feb 02 09:34:45 crc kubenswrapper[4720]: I0202 09:34:45.280015 4720 generic.go:334] "Generic (PLEG): container finished" podID="4ea7861d-22fa-43cc-ad91-d700bd7e025b" containerID="20e86676ed4603a9671e7c7bc2e081f7ef139162324e7469cc785117e3686534" exitCode=0 Feb 02 09:34:45 crc kubenswrapper[4720]: I0202 09:34:45.280113 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" event={"ID":"4ea7861d-22fa-43cc-ad91-d700bd7e025b","Type":"ContainerDied","Data":"20e86676ed4603a9671e7c7bc2e081f7ef139162324e7469cc785117e3686534"} Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.757403 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.911989 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam\") pod \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.912126 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0\") pod \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.912192 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle\") pod \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.912226 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6bq\" (UniqueName: \"kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq\") pod \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.912354 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory\") pod \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\" (UID: \"4ea7861d-22fa-43cc-ad91-d700bd7e025b\") " Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.917701 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4ea7861d-22fa-43cc-ad91-d700bd7e025b" (UID: "4ea7861d-22fa-43cc-ad91-d700bd7e025b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.923600 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq" (OuterVolumeSpecName: "kube-api-access-4v6bq") pod "4ea7861d-22fa-43cc-ad91-d700bd7e025b" (UID: "4ea7861d-22fa-43cc-ad91-d700bd7e025b"). InnerVolumeSpecName "kube-api-access-4v6bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.945723 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory" (OuterVolumeSpecName: "inventory") pod "4ea7861d-22fa-43cc-ad91-d700bd7e025b" (UID: "4ea7861d-22fa-43cc-ad91-d700bd7e025b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.949650 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ea7861d-22fa-43cc-ad91-d700bd7e025b" (UID: "4ea7861d-22fa-43cc-ad91-d700bd7e025b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:34:46 crc kubenswrapper[4720]: I0202 09:34:46.958506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4ea7861d-22fa-43cc-ad91-d700bd7e025b" (UID: "4ea7861d-22fa-43cc-ad91-d700bd7e025b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.015376 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.015416 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.015430 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.015444 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea7861d-22fa-43cc-ad91-d700bd7e025b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.015458 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6bq\" (UniqueName: \"kubernetes.io/projected/4ea7861d-22fa-43cc-ad91-d700bd7e025b-kube-api-access-4v6bq\") on node \"crc\" DevicePath \"\"" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.305328 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" event={"ID":"4ea7861d-22fa-43cc-ad91-d700bd7e025b","Type":"ContainerDied","Data":"9d77597eadd1f63022634aeb5e97d2734112cd57a181108a605aaf925e53256b"} Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.305377 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d77597eadd1f63022634aeb5e97d2734112cd57a181108a605aaf925e53256b" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.305425 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.415457 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh"] Feb 02 09:34:47 crc kubenswrapper[4720]: E0202 09:34:47.415813 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea7861d-22fa-43cc-ad91-d700bd7e025b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.415833 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea7861d-22fa-43cc-ad91-d700bd7e025b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 09:34:47 crc kubenswrapper[4720]: E0202 09:34:47.415857 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="extract-utilities" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.415865 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="extract-utilities" Feb 02 09:34:47 crc kubenswrapper[4720]: E0202 09:34:47.415904 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="registry-server" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.415910 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="registry-server" Feb 02 09:34:47 crc kubenswrapper[4720]: E0202 09:34:47.415931 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="extract-content" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.415937 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="extract-content" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.416120 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="032ab383-5d92-4675-b1a2-9aec9bfd87af" containerName="registry-server" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.416135 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea7861d-22fa-43cc-ad91-d700bd7e025b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.416716 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.419451 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.419788 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.419835 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.419802 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.420173 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.420255 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.420194 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.438726 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh"] Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.525518 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.525571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.525625 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.525673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.525856 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.526011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.526043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.526064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jm2\" (UniqueName: \"kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.526090 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.627781 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.627856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jm2\" (UniqueName: \"kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628477 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628565 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628610 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628632 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.628661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.629414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.632295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.633079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.633258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.633516 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.634296 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.634964 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.636197 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.648269 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jm2\" (UniqueName: \"kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6h4jh\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:47 crc kubenswrapper[4720]: I0202 09:34:47.735980 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:34:48 crc kubenswrapper[4720]: I0202 09:34:48.100925 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh"] Feb 02 09:34:48 crc kubenswrapper[4720]: I0202 09:34:48.331214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" event={"ID":"709087d8-ff60-4902-acf4-f4b23ffe4149","Type":"ContainerStarted","Data":"62feac5602a5a138ce26b0f1df822479f350cb771ab852939847c60cf992ba04"} Feb 02 09:34:49 crc kubenswrapper[4720]: I0202 09:34:49.343868 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" event={"ID":"709087d8-ff60-4902-acf4-f4b23ffe4149","Type":"ContainerStarted","Data":"7ed0d2ad3f60347377ac5a563b7a905ba070b188ba180e4f259ea2fa93c54275"} Feb 02 09:34:49 crc kubenswrapper[4720]: I0202 09:34:49.381090 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" podStartSLOduration=1.9087193230000001 podStartE2EDuration="2.381060818s" podCreationTimestamp="2026-02-02 09:34:47 +0000 UTC" firstStartedPulling="2026-02-02 09:34:48.103078315 +0000 UTC m=+2321.958703871" lastFinishedPulling="2026-02-02 09:34:48.57541976 +0000 UTC m=+2322.431045366" observedRunningTime="2026-02-02 09:34:49.371551936 +0000 UTC m=+2323.227177492" watchObservedRunningTime="2026-02-02 09:34:49.381060818 +0000 UTC m=+2323.236686404" Feb 02 09:35:17 crc kubenswrapper[4720]: I0202 09:35:17.901624 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:35:17 crc kubenswrapper[4720]: I0202 09:35:17.902153 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:35:47 crc kubenswrapper[4720]: I0202 09:35:47.901644 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:35:47 crc kubenswrapper[4720]: I0202 09:35:47.902367 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:36:17 crc kubenswrapper[4720]: I0202 09:36:17.901813 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:36:17 crc kubenswrapper[4720]: I0202 09:36:17.902623 4720 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:36:17 crc kubenswrapper[4720]: I0202 09:36:17.902688 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:36:17 crc kubenswrapper[4720]: I0202 09:36:17.903763 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:36:17 crc kubenswrapper[4720]: I0202 09:36:17.903865 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" gracePeriod=600 Feb 02 09:36:18 crc kubenswrapper[4720]: E0202 09:36:18.032731 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:36:18 crc kubenswrapper[4720]: I0202 09:36:18.354790 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" exitCode=0 Feb 02 09:36:18 crc kubenswrapper[4720]: I0202 09:36:18.355104 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf"} Feb 02 09:36:18 crc kubenswrapper[4720]: I0202 09:36:18.355137 4720 scope.go:117] "RemoveContainer" containerID="3abce72ff32e36d5e24d37c94c08f594bd64efc91ca435e3c2ca1d3db2b3b20f" Feb 02 09:36:18 crc kubenswrapper[4720]: I0202 09:36:18.356023 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:36:18 crc kubenswrapper[4720]: E0202 09:36:18.356385 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:36:31 crc kubenswrapper[4720]: I0202 09:36:31.887203 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:36:31 crc kubenswrapper[4720]: E0202 09:36:31.888338 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:36:45 crc kubenswrapper[4720]: I0202 09:36:45.887631 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:36:45 crc kubenswrapper[4720]: E0202 09:36:45.888984 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:36:52 crc kubenswrapper[4720]: I0202 09:36:52.709605 4720 generic.go:334] "Generic (PLEG): container finished" podID="709087d8-ff60-4902-acf4-f4b23ffe4149" containerID="7ed0d2ad3f60347377ac5a563b7a905ba070b188ba180e4f259ea2fa93c54275" exitCode=0 Feb 02 09:36:52 crc kubenswrapper[4720]: I0202 09:36:52.709731 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" event={"ID":"709087d8-ff60-4902-acf4-f4b23ffe4149","Type":"ContainerDied","Data":"7ed0d2ad3f60347377ac5a563b7a905ba070b188ba180e4f259ea2fa93c54275"} Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.202574 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359459 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359547 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359609 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359691 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5jm2\" (UniqueName: \"kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2\") 
pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359741 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359774 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.359867 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.360444 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam\") pod \"709087d8-ff60-4902-acf4-f4b23ffe4149\" (UID: \"709087d8-ff60-4902-acf4-f4b23ffe4149\") " Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.365387 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.369056 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2" (OuterVolumeSpecName: "kube-api-access-d5jm2") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "kube-api-access-d5jm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.390928 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.391648 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.391675 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.391743 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.392462 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.399669 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.405093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory" (OuterVolumeSpecName: "inventory") pod "709087d8-ff60-4902-acf4-f4b23ffe4149" (UID: "709087d8-ff60-4902-acf4-f4b23ffe4149"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.462938 4720 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.462975 4720 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.462984 4720 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.462994 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.463005 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5jm2\" (UniqueName: \"kubernetes.io/projected/709087d8-ff60-4902-acf4-f4b23ffe4149-kube-api-access-d5jm2\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.463015 4720 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.463037 4720 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.463046 4720 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.463055 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/709087d8-ff60-4902-acf4-f4b23ffe4149-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.733487 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" event={"ID":"709087d8-ff60-4902-acf4-f4b23ffe4149","Type":"ContainerDied","Data":"62feac5602a5a138ce26b0f1df822479f350cb771ab852939847c60cf992ba04"} Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.733531 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62feac5602a5a138ce26b0f1df822479f350cb771ab852939847c60cf992ba04" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.733595 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6h4jh" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.863063 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck"] Feb 02 09:36:54 crc kubenswrapper[4720]: E0202 09:36:54.863556 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709087d8-ff60-4902-acf4-f4b23ffe4149" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.863583 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="709087d8-ff60-4902-acf4-f4b23ffe4149" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.863819 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="709087d8-ff60-4902-acf4-f4b23ffe4149" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.864639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.866933 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.868435 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.868617 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.868785 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zpbp7" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.869076 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.871543 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck"] Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.972079 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.972224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.972316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.972733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.972764 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.973053 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:54 crc kubenswrapper[4720]: I0202 09:36:54.973100 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzsj\" (UniqueName: \"kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.074868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.074975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.075010 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.075067 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.075091 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzsj\" (UniqueName: \"kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.075149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.075194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.079604 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.079719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.079975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.080314 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.081053 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.086408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.092757 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzsj\" (UniqueName: \"kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w4mck\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.233625 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:36:55 crc kubenswrapper[4720]: W0202 09:36:55.807907 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6295be1_7d41_4b42_a8a6_7b18ff6bb04e.slice/crio-33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc WatchSource:0}: Error finding container 33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc: Status 404 returned error can't find the container with id 33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc Feb 02 09:36:55 crc kubenswrapper[4720]: I0202 09:36:55.809324 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck"] Feb 02 09:36:56 crc kubenswrapper[4720]: I0202 09:36:56.752387 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" event={"ID":"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e","Type":"ContainerStarted","Data":"68c9a6e3b2ef191614649e3d2a3e94f45935d53e9fc9e9c4df944549516940f3"} Feb 02 09:36:56 crc kubenswrapper[4720]: I0202 09:36:56.752681 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" event={"ID":"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e","Type":"ContainerStarted","Data":"33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc"} Feb 02 09:36:56 crc kubenswrapper[4720]: I0202 09:36:56.786248 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" podStartSLOduration=2.387851269 podStartE2EDuration="2.786211124s" podCreationTimestamp="2026-02-02 09:36:54 +0000 UTC" firstStartedPulling="2026-02-02 09:36:55.812205697 +0000 UTC m=+2449.667831273" lastFinishedPulling="2026-02-02 09:36:56.210565562 +0000 UTC m=+2450.066191128" observedRunningTime="2026-02-02 09:36:56.780919745 +0000 UTC m=+2450.636545351" watchObservedRunningTime="2026-02-02 09:36:56.786211124 +0000 UTC m=+2450.641836680" 
Feb 02 09:36:56 crc kubenswrapper[4720]: I0202 09:36:56.895523 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:36:56 crc kubenswrapper[4720]: E0202 09:36:56.897119 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:37:10 crc kubenswrapper[4720]: I0202 09:37:10.888638 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:37:10 crc kubenswrapper[4720]: E0202 09:37:10.889374 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:37:22 crc kubenswrapper[4720]: I0202 09:37:22.886665 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:37:22 crc kubenswrapper[4720]: E0202 09:37:22.888026 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:37:36 crc kubenswrapper[4720]: I0202 09:37:36.899608 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:37:36 crc kubenswrapper[4720]: E0202 09:37:36.905710 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:37:48 crc kubenswrapper[4720]: I0202 09:37:48.891994 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:37:48 crc kubenswrapper[4720]: E0202 09:37:48.892976 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:37:59 crc kubenswrapper[4720]: I0202 09:37:59.893727 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:37:59 
crc kubenswrapper[4720]: E0202 09:37:59.895321 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:38:10 crc kubenswrapper[4720]: I0202 09:38:10.887335 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:38:10 crc kubenswrapper[4720]: E0202 09:38:10.888285 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:38:23 crc kubenswrapper[4720]: I0202 09:38:23.887547 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:38:23 crc kubenswrapper[4720]: E0202 09:38:23.888830 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:38:36 crc kubenswrapper[4720]: I0202 09:38:36.893685 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:38:36 crc kubenswrapper[4720]: E0202 09:38:36.894720 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:38:47 crc kubenswrapper[4720]: I0202 09:38:47.886691 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:38:47 crc kubenswrapper[4720]: E0202 09:38:47.888248 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:01 crc kubenswrapper[4720]: I0202 09:39:01.887681 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:39:01 crc kubenswrapper[4720]: E0202 09:39:01.890161 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:12 crc kubenswrapper[4720]: I0202 09:39:12.888939 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:39:12 crc kubenswrapper[4720]: E0202 09:39:12.891299 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:17 crc kubenswrapper[4720]: I0202 09:39:17.357841 4720 generic.go:334] "Generic (PLEG): container finished" podID="e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" containerID="68c9a6e3b2ef191614649e3d2a3e94f45935d53e9fc9e9c4df944549516940f3" exitCode=0 Feb 02 09:39:17 crc kubenswrapper[4720]: I0202 09:39:17.357918 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" event={"ID":"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e","Type":"ContainerDied","Data":"68c9a6e3b2ef191614649e3d2a3e94f45935d53e9fc9e9c4df944549516940f3"} Feb 02 09:39:18 crc kubenswrapper[4720]: I0202 09:39:18.858353 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.003266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.003711 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.003802 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfzsj\" (UniqueName: \"kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.003972 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.004089 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam\") pod 
\"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.004115 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.004153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1\") pod \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\" (UID: \"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e\") " Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.011988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.017042 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj" (OuterVolumeSpecName: "kube-api-access-nfzsj") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "kube-api-access-nfzsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.040326 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.040780 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory" (OuterVolumeSpecName: "inventory") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.040932 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.044518 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.046079 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" (UID: "e6295be1-7d41-4b42-a8a6-7b18ff6bb04e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.106791 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107017 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107078 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfzsj\" (UniqueName: \"kubernetes.io/projected/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-kube-api-access-nfzsj\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107135 4720 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107222 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107288 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.107351 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6295be1-7d41-4b42-a8a6-7b18ff6bb04e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.382768 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" event={"ID":"e6295be1-7d41-4b42-a8a6-7b18ff6bb04e","Type":"ContainerDied","Data":"33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc"} Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.382813 4720 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="33c6d7477489a3006e220fda76e5af9b70ccc94189f10a9c3b8f976537c39ecc" Feb 02 09:39:19 crc kubenswrapper[4720]: I0202 09:39:19.382870 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w4mck" Feb 02 09:39:26 crc kubenswrapper[4720]: I0202 09:39:26.888675 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:39:26 crc kubenswrapper[4720]: E0202 09:39:26.889759 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:38 crc kubenswrapper[4720]: I0202 09:39:38.889861 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:39:38 crc kubenswrapper[4720]: E0202 09:39:38.890804 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:53 crc kubenswrapper[4720]: I0202 09:39:53.886808 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:39:53 crc kubenswrapper[4720]: E0202 09:39:53.887950 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.644054 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-st62v"] Feb 02 09:39:58 crc kubenswrapper[4720]: E0202 09:39:58.645325 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.645345 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.645631 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6295be1-7d41-4b42-a8a6-7b18ff6bb04e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.647383 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.660511 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st62v"] Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.737815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.738002 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9jr\" (UniqueName: \"kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.738113 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.839476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.839575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.839631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9jr\" (UniqueName: \"kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.840129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.840232 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.861217 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kc9jr\" (UniqueName: \"kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr\") pod \"redhat-operators-st62v\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:58 crc kubenswrapper[4720]: I0202 09:39:58.973024 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:39:59 crc kubenswrapper[4720]: I0202 09:39:59.507634 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st62v"] Feb 02 09:39:59 crc kubenswrapper[4720]: I0202 09:39:59.784468 4720 generic.go:334] "Generic (PLEG): container finished" podID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerID="1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff" exitCode=0 Feb 02 09:39:59 crc kubenswrapper[4720]: I0202 09:39:59.784516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerDied","Data":"1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff"} Feb 02 09:39:59 crc kubenswrapper[4720]: I0202 09:39:59.784548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerStarted","Data":"5d9c78ed5ad90b31aa0f1385dce06a69cc44bc90b25e78c9fe862f7440fa5623"} Feb 02 09:39:59 crc kubenswrapper[4720]: I0202 09:39:59.788228 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:40:01 crc kubenswrapper[4720]: I0202 09:40:01.807235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerStarted","Data":"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b"} Feb 02 09:40:02 crc kubenswrapper[4720]: I0202 09:40:02.822802 4720 generic.go:334] "Generic (PLEG): container finished" podID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerID="3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b" exitCode=0 Feb 02 09:40:02 crc kubenswrapper[4720]: I0202 09:40:02.822893 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerDied","Data":"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b"} Feb 02 09:40:04 crc kubenswrapper[4720]: I0202 09:40:04.845525 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerStarted","Data":"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c"} Feb 02 09:40:04 crc kubenswrapper[4720]: I0202 09:40:04.880760 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-st62v" podStartSLOduration=2.3607409280000002 podStartE2EDuration="6.88074243s" podCreationTimestamp="2026-02-02 09:39:58 +0000 UTC" firstStartedPulling="2026-02-02 09:39:59.787798582 +0000 UTC m=+2633.643424148" lastFinishedPulling="2026-02-02 09:40:04.307800084 +0000 UTC m=+2638.163425650" observedRunningTime="2026-02-02 09:40:04.870340306 +0000 UTC m=+2638.725965872" watchObservedRunningTime="2026-02-02 09:40:04.88074243 +0000 UTC m=+2638.736367986" Feb 02 09:40:05 crc 
kubenswrapper[4720]: I0202 09:40:05.888556 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf"
Feb 02 09:40:05 crc kubenswrapper[4720]: E0202 09:40:05.888930 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:40:08 crc kubenswrapper[4720]: I0202 09:40:08.973290 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-st62v"
Feb 02 09:40:08 crc kubenswrapper[4720]: I0202 09:40:08.973639 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-st62v"
Feb 02 09:40:10 crc kubenswrapper[4720]: I0202 09:40:10.022461 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-st62v" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="registry-server" probeResult="failure" output=<
Feb 02 09:40:10 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Feb 02 09:40:10 crc kubenswrapper[4720]: >
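The "Probe failed" entry above shows the registry-server startup probe timing out while trying to reach the service on :50051 within its 1s window; the entries that follow show the same probe flipping to started, and readiness to ready, nine seconds later. A rough sketch of that kind of connect-with-deadline check (a plain TCP dial here, which is an assumption about the probe mechanism; it only illustrates the timeout behaviour):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Sketch of a startup-style probe: try to connect to the service socket
// within a fixed window and report failure the way prober.go logs it above.
func probeOnce(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %v", addr, timeout)
	}
	conn.Close()
	return nil
}

func main() {
	if err := probeOnce(":50051", time.Second); err != nil {
		fmt.Println("Probe failed:", err) // e.g. the 09:40:10 entry above
	} else {
		fmt.Println("probe ok") // e.g. the 09:40:19 "started"/"ready" entries below
	}
}
```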
Need to start a new one" pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.646853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9jr\" (UniqueName: \"kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr\") pod \"11653e84-4bfa-4991-abc8-967f0db4be3a\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.647196 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities\") pod \"11653e84-4bfa-4991-abc8-967f0db4be3a\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.647522 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content\") pod \"11653e84-4bfa-4991-abc8-967f0db4be3a\" (UID: \"11653e84-4bfa-4991-abc8-967f0db4be3a\") " Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.648280 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities" (OuterVolumeSpecName: "utilities") pod "11653e84-4bfa-4991-abc8-967f0db4be3a" (UID: "11653e84-4bfa-4991-abc8-967f0db4be3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.653909 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.672559 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr" (OuterVolumeSpecName: "kube-api-access-kc9jr") pod "11653e84-4bfa-4991-abc8-967f0db4be3a" (UID: "11653e84-4bfa-4991-abc8-967f0db4be3a"). InnerVolumeSpecName "kube-api-access-kc9jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.756184 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc9jr\" (UniqueName: \"kubernetes.io/projected/11653e84-4bfa-4991-abc8-967f0db4be3a-kube-api-access-kc9jr\") on node \"crc\" DevicePath \"\"" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.762840 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11653e84-4bfa-4991-abc8-967f0db4be3a" (UID: "11653e84-4bfa-4991-abc8-967f0db4be3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:40:21 crc kubenswrapper[4720]: I0202 09:40:21.857918 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11653e84-4bfa-4991-abc8-967f0db4be3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.049642 4720 generic.go:334] "Generic (PLEG): container finished" podID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerID="a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c" exitCode=0 Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.049691 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerDied","Data":"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c"} Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.049705 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st62v" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.049725 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st62v" event={"ID":"11653e84-4bfa-4991-abc8-967f0db4be3a","Type":"ContainerDied","Data":"5d9c78ed5ad90b31aa0f1385dce06a69cc44bc90b25e78c9fe862f7440fa5623"} Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.049754 4720 scope.go:117] "RemoveContainer" containerID="a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.112492 4720 scope.go:117] "RemoveContainer" containerID="3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.119911 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st62v"] Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.128451 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-st62v"] Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.138255 4720 scope.go:117] "RemoveContainer" containerID="1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.176962 4720 scope.go:117] "RemoveContainer" containerID="a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c" Feb 02 09:40:22 crc kubenswrapper[4720]: E0202 09:40:22.178236 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c\": container with ID starting with a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c not found: ID does not exist" containerID="a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.178274 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c"} err="failed to get container status \"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c\": rpc error: code = NotFound desc = could not find container \"a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c\": container with ID starting with a74be4131dd0c5ad2e89da5fa9a668bd59c52358b2625bc0b2c0579c7b2d958c not found: ID does not exist" Feb 02 09:40:22 crc 
kubenswrapper[4720]: I0202 09:40:22.178298 4720 scope.go:117] "RemoveContainer" containerID="3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b" Feb 02 09:40:22 crc kubenswrapper[4720]: E0202 09:40:22.178676 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b\": container with ID starting with 3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b not found: ID does not exist" containerID="3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.178705 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b"} err="failed to get container status \"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b\": rpc error: code = NotFound desc = could not find container \"3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b\": container with ID starting with 3c747b29303a136c55adc63692241cc4afb5e4d4f24dfda174a997440484bc9b not found: ID does not exist" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.178721 4720 scope.go:117] "RemoveContainer" containerID="1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff" Feb 02 09:40:22 crc kubenswrapper[4720]: E0202 09:40:22.179006 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff\": container with ID starting with 1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff not found: ID does not exist" containerID="1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.179030 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff"} err="failed to get container status \"1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff\": rpc error: code = NotFound desc = could not find container \"1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff\": container with ID starting with 1e4c597322571076bb50cf61d79850f8d6f6d323ccf7dbc623f5d1f78fadd7ff not found: ID does not exist" Feb 02 09:40:22 crc kubenswrapper[4720]: I0202 09:40:22.909513 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" path="/var/lib/kubelet/pods/11653e84-4bfa-4991-abc8-967f0db4be3a/volumes" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.613891 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 09:40:28 crc kubenswrapper[4720]: E0202 09:40:28.614794 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="extract-content" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.614808 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="extract-content" Feb 02 09:40:28 crc kubenswrapper[4720]: E0202 09:40:28.614819 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="registry-server" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.614824 4720 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="registry-server" Feb 02 09:40:28 crc kubenswrapper[4720]: E0202 09:40:28.614850 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="extract-utilities" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.614856 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="extract-utilities" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.615087 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="11653e84-4bfa-4991-abc8-967f0db4be3a" containerName="registry-server" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.615732 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.617867 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.617985 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.618077 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.627662 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707579 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707658 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707707 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qss8k\" (UniqueName: \"kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " 
pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707744 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.707776 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.709394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.709479 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813517 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " 
pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.813989 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qss8k\" (UniqueName: \"kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.814046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.814073 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.814847 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.819051 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.819083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.820311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.820507 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.820951 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.824666 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.837290 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qss8k\" (UniqueName: \"kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.843909 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " pod="openstack/tempest-tests-tempest" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.887707 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:40:28 crc kubenswrapper[4720]: E0202 09:40:28.888269 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:40:28 crc kubenswrapper[4720]: I0202 09:40:28.939477 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 09:40:29 crc kubenswrapper[4720]: I0202 09:40:29.444847 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 09:40:30 crc kubenswrapper[4720]: I0202 09:40:30.155531 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"daa51d24-e496-4a32-88c3-89ef00451e74","Type":"ContainerStarted","Data":"fa86e6dc66697660768d9ffebf013d2d501ca7564e9d4a159d0eadb43f40fbd0"} Feb 02 09:40:41 crc kubenswrapper[4720]: I0202 09:40:41.887851 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:40:41 crc kubenswrapper[4720]: E0202 09:40:41.888768 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:40:54 crc kubenswrapper[4720]: I0202 09:40:54.887736 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:40:54 crc kubenswrapper[4720]: E0202 09:40:54.888910 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:40:56 crc kubenswrapper[4720]: E0202 09:40:56.594337 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 09:40:56 crc kubenswrapper[4720]: E0202 09:40:56.595079 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qss8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(daa51d24-e496-4a32-88c3-89ef00451e74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 09:40:56 crc kubenswrapper[4720]: E0202 09:40:56.596363 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="daa51d24-e496-4a32-88c3-89ef00451e74" Feb 02 09:40:57 crc kubenswrapper[4720]: E0202 09:40:57.432117 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="daa51d24-e496-4a32-88c3-89ef00451e74" Feb 02 09:41:07 crc kubenswrapper[4720]: I0202 09:41:07.887593 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:41:07 crc kubenswrapper[4720]: E0202 09:41:07.888514 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:41:12 crc kubenswrapper[4720]: I0202 09:41:12.439097 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 09:41:14 crc kubenswrapper[4720]: I0202 09:41:14.623447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"daa51d24-e496-4a32-88c3-89ef00451e74","Type":"ContainerStarted","Data":"b0e87b0b9bb04bc128260ef6b26cf1ce671d7dde0622aff2b7082802ddaabac3"} Feb 02 09:41:14 crc kubenswrapper[4720]: I0202 09:41:14.658746 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.669759114 podStartE2EDuration="47.658711132s" podCreationTimestamp="2026-02-02 09:40:27 +0000 UTC" firstStartedPulling="2026-02-02 09:40:29.447005804 +0000 UTC m=+2663.302631380" lastFinishedPulling="2026-02-02 09:41:12.435957812 +0000 UTC m=+2706.291583398" observedRunningTime="2026-02-02 09:41:14.64263061 +0000 UTC m=+2708.498256206" watchObservedRunningTime="2026-02-02 09:41:14.658711132 +0000 UTC m=+2708.514336728" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.768213 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.772258 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.787161 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.866786 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.867516 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9md\" (UniqueName: \"kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.867571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.887557 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.969694 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd9md\" (UniqueName: \"kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.970085 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.970143 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.973387 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.973717 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content\") pod \"community-operators-dsbl7\" (UID: 
\"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:21 crc kubenswrapper[4720]: I0202 09:41:21.998012 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd9md\" (UniqueName: \"kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md\") pod \"community-operators-dsbl7\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:22 crc kubenswrapper[4720]: I0202 09:41:22.127425 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:22 crc kubenswrapper[4720]: I0202 09:41:22.685347 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:22 crc kubenswrapper[4720]: I0202 09:41:22.705791 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerStarted","Data":"02c53dd9bb8a37709b4e3825cef2dc38af38b4c18690d8edb1de3195caec635b"} Feb 02 09:41:22 crc kubenswrapper[4720]: I0202 09:41:22.707442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420"} Feb 02 09:41:23 crc kubenswrapper[4720]: I0202 09:41:23.723771 4720 generic.go:334] "Generic (PLEG): container finished" podID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerID="736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4" exitCode=0 Feb 02 09:41:23 crc kubenswrapper[4720]: I0202 09:41:23.723831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerDied","Data":"736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4"} Feb 02 09:41:25 crc kubenswrapper[4720]: I0202 09:41:25.746057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerStarted","Data":"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219"} Feb 02 09:41:26 crc kubenswrapper[4720]: I0202 09:41:26.755335 4720 generic.go:334] "Generic (PLEG): container finished" podID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerID="c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219" exitCode=0 Feb 02 09:41:26 crc kubenswrapper[4720]: I0202 09:41:26.755442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerDied","Data":"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219"} Feb 02 09:41:27 crc kubenswrapper[4720]: I0202 09:41:27.770740 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerStarted","Data":"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448"} Feb 02 09:41:27 crc kubenswrapper[4720]: I0202 09:41:27.795694 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dsbl7" podStartSLOduration=3.34798084 
podStartE2EDuration="6.795678086s" podCreationTimestamp="2026-02-02 09:41:21 +0000 UTC" firstStartedPulling="2026-02-02 09:41:23.726381224 +0000 UTC m=+2717.582006790" lastFinishedPulling="2026-02-02 09:41:27.17407848 +0000 UTC m=+2721.029704036" observedRunningTime="2026-02-02 09:41:27.792064049 +0000 UTC m=+2721.647689605" watchObservedRunningTime="2026-02-02 09:41:27.795678086 +0000 UTC m=+2721.651303632" Feb 02 09:41:32 crc kubenswrapper[4720]: I0202 09:41:32.128849 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:32 crc kubenswrapper[4720]: I0202 09:41:32.129490 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:32 crc kubenswrapper[4720]: I0202 09:41:32.181364 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:32 crc kubenswrapper[4720]: I0202 09:41:32.909257 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:32 crc kubenswrapper[4720]: I0202 09:41:32.970222 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:34 crc kubenswrapper[4720]: I0202 09:41:34.841821 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dsbl7" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="registry-server" containerID="cri-o://96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448" gracePeriod=2 Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.341488 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.387792 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd9md\" (UniqueName: \"kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md\") pod \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.387838 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content\") pod \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.387982 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities\") pod \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\" (UID: \"9b32ef1f-850a-43e8-88b2-d63904e32d5d\") " Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.388854 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities" (OuterVolumeSpecName: "utilities") pod "9b32ef1f-850a-43e8-88b2-d63904e32d5d" (UID: "9b32ef1f-850a-43e8-88b2-d63904e32d5d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.395782 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md" (OuterVolumeSpecName: "kube-api-access-fd9md") pod "9b32ef1f-850a-43e8-88b2-d63904e32d5d" (UID: "9b32ef1f-850a-43e8-88b2-d63904e32d5d"). InnerVolumeSpecName "kube-api-access-fd9md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.451680 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b32ef1f-850a-43e8-88b2-d63904e32d5d" (UID: "9b32ef1f-850a-43e8-88b2-d63904e32d5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.491545 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd9md\" (UniqueName: \"kubernetes.io/projected/9b32ef1f-850a-43e8-88b2-d63904e32d5d-kube-api-access-fd9md\") on node \"crc\" DevicePath \"\"" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.491579 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.491589 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b32ef1f-850a-43e8-88b2-d63904e32d5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.863463 4720 generic.go:334] "Generic (PLEG): container finished" podID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerID="96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448" exitCode=0 Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.864013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerDied","Data":"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448"} Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.864070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsbl7" event={"ID":"9b32ef1f-850a-43e8-88b2-d63904e32d5d","Type":"ContainerDied","Data":"02c53dd9bb8a37709b4e3825cef2dc38af38b4c18690d8edb1de3195caec635b"} Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.864113 4720 scope.go:117] "RemoveContainer" containerID="96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.864336 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dsbl7" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.947340 4720 scope.go:117] "RemoveContainer" containerID="c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219" Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.959965 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:35 crc kubenswrapper[4720]: I0202 09:41:35.980691 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dsbl7"] Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.025642 4720 scope.go:117] "RemoveContainer" containerID="736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.064025 4720 scope.go:117] "RemoveContainer" containerID="96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448" Feb 02 09:41:36 crc kubenswrapper[4720]: E0202 09:41:36.064554 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448\": container with ID starting with 96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448 not found: ID does not exist" containerID="96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.064596 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448"} err="failed to get container status \"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448\": rpc error: code = NotFound desc = could not find container \"96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448\": container with ID starting with 96c4bf0ddcb68442729190a7658a042adc3c13a4a840746f6f133878c45c6448 not found: ID does not exist" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.064626 4720 scope.go:117] "RemoveContainer" containerID="c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219" Feb 02 09:41:36 crc kubenswrapper[4720]: E0202 09:41:36.065412 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219\": container with ID starting with c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219 not found: ID does not exist" containerID="c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.065465 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219"} err="failed to get container status \"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219\": rpc error: code = NotFound desc = could not find container \"c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219\": container with ID starting with c89a594c2f67001c25331e6d2fe6eed95f167f551c030c3c995ceced2665d219 not found: ID does not exist" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.065484 4720 scope.go:117] "RemoveContainer" containerID="736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4" Feb 02 09:41:36 crc kubenswrapper[4720]: E0202 09:41:36.065725 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4\": container with ID starting with 736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4 not found: ID does not exist" containerID="736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.065746 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4"} err="failed to get container status \"736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4\": rpc error: code = NotFound desc = could not find container \"736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4\": container with ID starting with 736c24846f30cd47f2119f35afae920eac7ad4a7a27d3a761b2928cc2016ebc4 not found: ID does not exist" Feb 02 09:41:36 crc kubenswrapper[4720]: I0202 09:41:36.911729 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" path="/var/lib/kubelet/pods/9b32ef1f-850a-43e8-88b2-d63904e32d5d/volumes" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.619396 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:23 crc kubenswrapper[4720]: E0202 09:43:23.620223 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="extract-utilities" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.620235 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="extract-utilities" Feb 02 09:43:23 crc kubenswrapper[4720]: E0202 09:43:23.620248 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="extract-content" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.620255 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="extract-content" Feb 02 09:43:23 crc kubenswrapper[4720]: E0202 09:43:23.620265 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="registry-server" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.620270 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="registry-server" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.620472 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b32ef1f-850a-43e8-88b2-d63904e32d5d" containerName="registry-server" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.621669 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.639359 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.764344 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.764400 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvtn\" (UniqueName: \"kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.764495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.867123 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.867186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbvtn\" (UniqueName: \"kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.867263 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.867821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.867832 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.886716 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tbvtn\" (UniqueName: \"kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn\") pod \"redhat-marketplace-9d864\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:23 crc kubenswrapper[4720]: I0202 09:43:23.946027 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:24 crc kubenswrapper[4720]: I0202 09:43:24.598158 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:24 crc kubenswrapper[4720]: I0202 09:43:24.909651 4720 generic.go:334] "Generic (PLEG): container finished" podID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerID="a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915" exitCode=0 Feb 02 09:43:24 crc kubenswrapper[4720]: I0202 09:43:24.909837 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerDied","Data":"a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915"} Feb 02 09:43:24 crc kubenswrapper[4720]: I0202 09:43:24.909994 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerStarted","Data":"a52b03c3ab464e2c84f0dce678ad212d4204e0354c2d9210d4efedb850d5481a"} Feb 02 09:43:25 crc kubenswrapper[4720]: I0202 09:43:25.919281 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerStarted","Data":"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5"} Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.626902 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.629342 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.640051 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.728622 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.728724 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.728777 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fnpr\" (UniqueName: \"kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.830544 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fnpr\" (UniqueName: \"kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.830653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.830762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.831199 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.831288 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.850335 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6fnpr\" (UniqueName: \"kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr\") pod \"certified-operators-sk67p\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.930207 4720 generic.go:334] "Generic (PLEG): container finished" podID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerID="dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5" exitCode=0 Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.930361 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerDied","Data":"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5"} Feb 02 09:43:26 crc kubenswrapper[4720]: I0202 09:43:26.966466 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.547635 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.944582 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerStarted","Data":"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5"} Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.947820 4720 generic.go:334] "Generic (PLEG): container finished" podID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerID="634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d" exitCode=0 Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.947859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerDied","Data":"634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d"} Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.947904 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerStarted","Data":"1254c3d52ba2f1accc729bdd22accc2e2594d184e5ceccaf9b1ff8b8784dedf1"} Feb 02 09:43:27 crc kubenswrapper[4720]: I0202 09:43:27.975565 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9d864" podStartSLOduration=2.538996583 podStartE2EDuration="4.975544942s" podCreationTimestamp="2026-02-02 09:43:23 +0000 UTC" firstStartedPulling="2026-02-02 09:43:24.911857892 +0000 UTC m=+2838.767483448" lastFinishedPulling="2026-02-02 09:43:27.348406251 +0000 UTC m=+2841.204031807" observedRunningTime="2026-02-02 09:43:27.968541891 +0000 UTC m=+2841.824167457" watchObservedRunningTime="2026-02-02 09:43:27.975544942 +0000 UTC m=+2841.831170498" Feb 02 09:43:28 crc kubenswrapper[4720]: I0202 09:43:28.965160 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerStarted","Data":"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a"} Feb 02 09:43:30 crc kubenswrapper[4720]: I0202 09:43:30.996537 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerID="cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a" exitCode=0 Feb 02 09:43:30 crc kubenswrapper[4720]: I0202 09:43:30.996616 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerDied","Data":"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a"} Feb 02 09:43:32 crc kubenswrapper[4720]: I0202 09:43:32.007142 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerStarted","Data":"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5"} Feb 02 09:43:32 crc kubenswrapper[4720]: I0202 09:43:32.031473 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sk67p" podStartSLOduration=2.587252926 podStartE2EDuration="6.031455967s" podCreationTimestamp="2026-02-02 09:43:26 +0000 UTC" firstStartedPulling="2026-02-02 09:43:27.951444134 +0000 UTC m=+2841.807069700" lastFinishedPulling="2026-02-02 09:43:31.395647185 +0000 UTC m=+2845.251272741" observedRunningTime="2026-02-02 09:43:32.026252461 +0000 UTC m=+2845.881878017" watchObservedRunningTime="2026-02-02 09:43:32.031455967 +0000 UTC m=+2845.887081523" Feb 02 09:43:33 crc kubenswrapper[4720]: I0202 09:43:33.946537 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:33 crc kubenswrapper[4720]: I0202 09:43:33.947741 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:34 crc kubenswrapper[4720]: I0202 09:43:34.994827 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9d864" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server" probeResult="failure" output=< Feb 02 09:43:34 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:43:34 crc kubenswrapper[4720]: > Feb 02 09:43:36 crc kubenswrapper[4720]: I0202 09:43:36.966724 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:36 crc kubenswrapper[4720]: I0202 09:43:36.967018 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:37 crc kubenswrapper[4720]: I0202 09:43:37.014084 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:37 crc kubenswrapper[4720]: I0202 09:43:37.124246 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:37 crc kubenswrapper[4720]: I0202 09:43:37.252903 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.090531 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sk67p" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="registry-server" containerID="cri-o://74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5" gracePeriod=2 Feb 02 09:43:39 crc 
kubenswrapper[4720]: I0202 09:43:39.787135 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.912724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities\") pod \"c58ce452-4b20-423d-9ccf-54c8456b6cea\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.913027 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content\") pod \"c58ce452-4b20-423d-9ccf-54c8456b6cea\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.913117 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fnpr\" (UniqueName: \"kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr\") pod \"c58ce452-4b20-423d-9ccf-54c8456b6cea\" (UID: \"c58ce452-4b20-423d-9ccf-54c8456b6cea\") " Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.913825 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities" (OuterVolumeSpecName: "utilities") pod "c58ce452-4b20-423d-9ccf-54c8456b6cea" (UID: "c58ce452-4b20-423d-9ccf-54c8456b6cea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.923867 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr" (OuterVolumeSpecName: "kube-api-access-6fnpr") pod "c58ce452-4b20-423d-9ccf-54c8456b6cea" (UID: "c58ce452-4b20-423d-9ccf-54c8456b6cea"). InnerVolumeSpecName "kube-api-access-6fnpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:43:39 crc kubenswrapper[4720]: I0202 09:43:39.962240 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c58ce452-4b20-423d-9ccf-54c8456b6cea" (UID: "c58ce452-4b20-423d-9ccf-54c8456b6cea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.015155 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.015407 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fnpr\" (UniqueName: \"kubernetes.io/projected/c58ce452-4b20-423d-9ccf-54c8456b6cea-kube-api-access-6fnpr\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.015418 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c58ce452-4b20-423d-9ccf-54c8456b6cea-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.102458 4720 generic.go:334] "Generic (PLEG): container finished" podID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerID="74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5" exitCode=0 Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.102512 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerDied","Data":"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5"} Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.102546 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sk67p" event={"ID":"c58ce452-4b20-423d-9ccf-54c8456b6cea","Type":"ContainerDied","Data":"1254c3d52ba2f1accc729bdd22accc2e2594d184e5ceccaf9b1ff8b8784dedf1"} Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.102597 4720 scope.go:117] "RemoveContainer" containerID="74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.102776 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sk67p" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.155255 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.157533 4720 scope.go:117] "RemoveContainer" containerID="cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.166686 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sk67p"] Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.180211 4720 scope.go:117] "RemoveContainer" containerID="634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.224793 4720 scope.go:117] "RemoveContainer" containerID="74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5" Feb 02 09:43:40 crc kubenswrapper[4720]: E0202 09:43:40.225178 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5\": container with ID starting with 74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5 not found: ID does not exist" containerID="74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.225228 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5"} err="failed to get container status \"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5\": rpc error: code = NotFound desc = could not find container \"74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5\": container with ID starting with 74bb7e96a3172c5bb42c9e9e695fdf8384b70fe550be2019c22b98178e39a8b5 not found: ID does not exist" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.225249 4720 scope.go:117] "RemoveContainer" containerID="cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a" Feb 02 09:43:40 crc kubenswrapper[4720]: E0202 09:43:40.225471 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a\": container with ID starting with cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a not found: ID does not exist" containerID="cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.225490 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a"} err="failed to get container status \"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a\": rpc error: code = NotFound desc = could not find container \"cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a\": container with ID starting with cc5c06b63ffa21a120116b3af0e241a21983086045ab5020855a4772e58ca24a not found: ID does not exist" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.225503 4720 scope.go:117] "RemoveContainer" containerID="634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d" Feb 02 09:43:40 crc kubenswrapper[4720]: E0202 09:43:40.225729 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d\": container with ID starting with 634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d not found: ID does not exist" containerID="634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.225747 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d"} err="failed to get container status \"634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d\": rpc error: code = NotFound desc = could not find container \"634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d\": container with ID starting with 634fa351b7c9846951aa9e97925203dff5d7921acfcca0edb7f17ba24289189d not found: ID does not exist" Feb 02 09:43:40 crc kubenswrapper[4720]: I0202 09:43:40.897038 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" path="/var/lib/kubelet/pods/c58ce452-4b20-423d-9ccf-54c8456b6cea/volumes" Feb 02 09:43:44 crc kubenswrapper[4720]: I0202 09:43:43.999386 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:44 crc kubenswrapper[4720]: I0202 09:43:44.057221 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:44 crc kubenswrapper[4720]: I0202 09:43:44.247439 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.146161 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9d864" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server" containerID="cri-o://496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5" gracePeriod=2 Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.826140 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.939382 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities\") pod \"d00d3530-8f58-4346-ad8b-a5d81ded6178\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.939429 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content\") pod \"d00d3530-8f58-4346-ad8b-a5d81ded6178\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.939570 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbvtn\" (UniqueName: \"kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn\") pod \"d00d3530-8f58-4346-ad8b-a5d81ded6178\" (UID: \"d00d3530-8f58-4346-ad8b-a5d81ded6178\") " Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.940259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities" (OuterVolumeSpecName: "utilities") pod "d00d3530-8f58-4346-ad8b-a5d81ded6178" (UID: "d00d3530-8f58-4346-ad8b-a5d81ded6178"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.948163 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn" (OuterVolumeSpecName: "kube-api-access-tbvtn") pod "d00d3530-8f58-4346-ad8b-a5d81ded6178" (UID: "d00d3530-8f58-4346-ad8b-a5d81ded6178"). InnerVolumeSpecName "kube-api-access-tbvtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:43:45 crc kubenswrapper[4720]: I0202 09:43:45.980645 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d00d3530-8f58-4346-ad8b-a5d81ded6178" (UID: "d00d3530-8f58-4346-ad8b-a5d81ded6178"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.042551 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.042592 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d00d3530-8f58-4346-ad8b-a5d81ded6178-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.042605 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbvtn\" (UniqueName: \"kubernetes.io/projected/d00d3530-8f58-4346-ad8b-a5d81ded6178-kube-api-access-tbvtn\") on node \"crc\" DevicePath \"\"" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.157224 4720 generic.go:334] "Generic (PLEG): container finished" podID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerID="496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5" exitCode=0 Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.157286 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d864" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.157301 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerDied","Data":"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5"} Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.157693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d864" event={"ID":"d00d3530-8f58-4346-ad8b-a5d81ded6178","Type":"ContainerDied","Data":"a52b03c3ab464e2c84f0dce678ad212d4204e0354c2d9210d4efedb850d5481a"} Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.157724 4720 scope.go:117] "RemoveContainer" containerID="496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.179851 4720 scope.go:117] "RemoveContainer" containerID="dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.197025 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.203040 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d864"] Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.212086 4720 scope.go:117] "RemoveContainer" containerID="a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.254713 4720 scope.go:117] "RemoveContainer" containerID="496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5" Feb 02 09:43:46 crc kubenswrapper[4720]: E0202 09:43:46.255125 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5\": container with ID starting with 496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5 not found: ID does not exist" containerID="496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.255166 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5"} err="failed to get container status \"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5\": rpc error: code = NotFound desc = could not find container \"496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5\": container with ID starting with 496ac491608ccb4f0e635a332e25cba2026b68c960374143fd1fa9ea79db9de5 not found: ID does not exist" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.255193 4720 scope.go:117] "RemoveContainer" containerID="dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5" Feb 02 09:43:46 crc kubenswrapper[4720]: E0202 09:43:46.255593 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5\": container with ID starting with dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5 not found: ID does not exist" containerID="dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.255632 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5"} err="failed to get container status \"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5\": rpc error: code = NotFound desc = could not find container \"dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5\": container with ID starting with dbb00c06cc3bdaeed8a9269c41dba0d153562744954aaa6ecfef2d796715daa5 not found: ID does not exist" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.255655 4720 scope.go:117] "RemoveContainer" containerID="a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915" Feb 02 09:43:46 crc kubenswrapper[4720]: E0202 09:43:46.256126 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915\": container with ID starting with a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915 not found: ID does not exist" containerID="a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.256152 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915"} err="failed to get container status \"a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915\": rpc error: code = NotFound desc = could not find container \"a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915\": container with ID starting with a8f5b7c2735f3b015ffc0db5c38daa116e66de4791f5fa01b1d8d0efb7a1b915 not found: ID does not exist" Feb 02 09:43:46 crc kubenswrapper[4720]: I0202 09:43:46.904748 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" path="/var/lib/kubelet/pods/d00d3530-8f58-4346-ad8b-a5d81ded6178/volumes" Feb 02 09:43:47 crc kubenswrapper[4720]: I0202 09:43:47.902521 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Feb 02 09:43:47 crc kubenswrapper[4720]: I0202 09:43:47.903125 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:44:17 crc kubenswrapper[4720]: I0202 09:44:17.902043 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:44:17 crc kubenswrapper[4720]: I0202 09:44:17.902570 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:44:47 crc kubenswrapper[4720]: I0202 09:44:47.902093 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:44:47 crc kubenswrapper[4720]: I0202 09:44:47.902714 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:44:47 crc kubenswrapper[4720]: I0202 09:44:47.902759 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw"
Feb 02 09:44:47 crc kubenswrapper[4720]: I0202 09:44:47.903509 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 09:44:47 crc kubenswrapper[4720]: I0202 09:44:47.903565 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420" gracePeriod=600
Feb 02 09:44:48 crc kubenswrapper[4720]: I0202 09:44:48.760451 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420" exitCode=0
Feb 02 09:44:48 crc kubenswrapper[4720]: I0202 09:44:48.760522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420"}
Feb 02 09:44:48 crc kubenswrapper[4720]: I0202 09:44:48.761102 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"}
Feb 02 09:44:48 crc kubenswrapper[4720]: I0202 09:44:48.761156 4720 scope.go:117] "RemoveContainer" containerID="53b699755756c70afe930f0bebb899f054fd4208115d548575d1cf6ce73e4baf"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.147202 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr"]
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148146 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="extract-content"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148162 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="extract-content"
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148181 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="extract-utilities"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148189 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="extract-utilities"
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148225 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="extract-content"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148233 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="extract-content"
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148252 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="extract-utilities"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148259 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="extract-utilities"
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148270 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="registry-server"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148279 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="registry-server"
Feb 02 09:45:00 crc kubenswrapper[4720]: E0202 09:45:00.148293 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148301 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148489 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58ce452-4b20-423d-9ccf-54c8456b6cea" containerName="registry-server"
Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.148502 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server"
"RemoveStaleState removing state" podUID="d00d3530-8f58-4346-ad8b-a5d81ded6178" containerName="registry-server" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.149423 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.153094 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.162700 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.171193 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr"] Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.341404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.341837 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.341863 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72pl\" (UniqueName: \"kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.443847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.444367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.444500 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72pl\" (UniqueName: \"kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc 
kubenswrapper[4720]: I0202 09:45:00.445862 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.451827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.464779 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72pl\" (UniqueName: \"kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl\") pod \"collect-profiles-29500425-f6mrr\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:00 crc kubenswrapper[4720]: I0202 09:45:00.471451 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:01 crc kubenswrapper[4720]: I0202 09:45:01.014177 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr"] Feb 02 09:45:01 crc kubenswrapper[4720]: W0202 09:45:01.019038 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9717b267_32e0_41d9_9ae9_c713df483953.slice/crio-6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31 WatchSource:0}: Error finding container 6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31: Status 404 returned error can't find the container with id 6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31 Feb 02 09:45:01 crc kubenswrapper[4720]: I0202 09:45:01.938664 4720 generic.go:334] "Generic (PLEG): container finished" podID="9717b267-32e0-41d9-9ae9-c713df483953" containerID="0668acd20f22a57959f105323f6131e1b760947350f93daab49c6284d7d99446" exitCode=0 Feb 02 09:45:01 crc kubenswrapper[4720]: I0202 09:45:01.938714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" event={"ID":"9717b267-32e0-41d9-9ae9-c713df483953","Type":"ContainerDied","Data":"0668acd20f22a57959f105323f6131e1b760947350f93daab49c6284d7d99446"} Feb 02 09:45:01 crc kubenswrapper[4720]: I0202 09:45:01.939264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" event={"ID":"9717b267-32e0-41d9-9ae9-c713df483953","Type":"ContainerStarted","Data":"6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31"} Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.599235 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.628990 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume\") pod \"9717b267-32e0-41d9-9ae9-c713df483953\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.629081 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume\") pod \"9717b267-32e0-41d9-9ae9-c713df483953\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.629197 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k72pl\" (UniqueName: \"kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl\") pod \"9717b267-32e0-41d9-9ae9-c713df483953\" (UID: \"9717b267-32e0-41d9-9ae9-c713df483953\") " Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.630653 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume" (OuterVolumeSpecName: "config-volume") pod "9717b267-32e0-41d9-9ae9-c713df483953" (UID: "9717b267-32e0-41d9-9ae9-c713df483953"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.637024 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl" (OuterVolumeSpecName: "kube-api-access-k72pl") pod "9717b267-32e0-41d9-9ae9-c713df483953" (UID: "9717b267-32e0-41d9-9ae9-c713df483953"). InnerVolumeSpecName "kube-api-access-k72pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.637127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9717b267-32e0-41d9-9ae9-c713df483953" (UID: "9717b267-32e0-41d9-9ae9-c713df483953"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.731635 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9717b267-32e0-41d9-9ae9-c713df483953-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.731656 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9717b267-32e0-41d9-9ae9-c713df483953-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.731667 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k72pl\" (UniqueName: \"kubernetes.io/projected/9717b267-32e0-41d9-9ae9-c713df483953-kube-api-access-k72pl\") on node \"crc\" DevicePath \"\"" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.958843 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" event={"ID":"9717b267-32e0-41d9-9ae9-c713df483953","Type":"ContainerDied","Data":"6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31"} Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.958894 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6161cb9d319bf48832b70b83ca0cb87b39d6e8de98f26cb365ef18c08e329d31" Feb 02 09:45:03 crc kubenswrapper[4720]: I0202 09:45:03.958932 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500425-f6mrr" Feb 02 09:45:04 crc kubenswrapper[4720]: I0202 09:45:04.685487 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4"] Feb 02 09:45:04 crc kubenswrapper[4720]: I0202 09:45:04.709183 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500380-klbx4"] Feb 02 09:45:04 crc kubenswrapper[4720]: I0202 09:45:04.900797 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e240ae-2392-450b-8913-72694775a55d" path="/var/lib/kubelet/pods/51e240ae-2392-450b-8913-72694775a55d/volumes" Feb 02 09:45:20 crc kubenswrapper[4720]: I0202 09:45:20.116186 4720 scope.go:117] "RemoveContainer" containerID="8cdc5d779f3fc2cabf95ace8867810c435921136415983dd989075473a6a8b39" Feb 02 09:47:17 crc kubenswrapper[4720]: I0202 09:47:17.902511 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:47:17 crc kubenswrapper[4720]: I0202 09:47:17.903193 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:47:47 crc kubenswrapper[4720]: I0202 09:47:47.901573 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 02 09:47:47 crc kubenswrapper[4720]: I0202 09:47:47.902157 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:48:17 crc kubenswrapper[4720]: I0202 09:48:17.902456 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 09:48:17 crc kubenswrapper[4720]: I0202 09:48:17.903092 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 09:48:17 crc kubenswrapper[4720]: I0202 09:48:17.903145 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw"
Feb 02 09:48:17 crc kubenswrapper[4720]: I0202 09:48:17.905555 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 09:48:17 crc kubenswrapper[4720]: I0202 09:48:17.905639 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" gracePeriod=600
Feb 02 09:48:18 crc kubenswrapper[4720]: E0202 09:48:18.028935 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:48:18 crc kubenswrapper[4720]: I0202 09:48:18.828965 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" exitCode=0
Feb 02 09:48:18 crc kubenswrapper[4720]: I0202 09:48:18.829072 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"}
Feb 02 09:48:18 crc kubenswrapper[4720]: I0202 09:48:18.829296 4720 scope.go:117] "RemoveContainer" containerID="7c075bb8e05e4721c775d31eeddab222f5329da0e0cae8bd26069c66a156a420"
Feb 02 09:48:18 crc kubenswrapper[4720]: I0202 09:48:18.830257 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:48:18 crc kubenswrapper[4720]: E0202 09:48:18.830865 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:48:32 crc kubenswrapper[4720]: I0202 09:48:32.886708 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:48:32 crc kubenswrapper[4720]: E0202 09:48:32.887616 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:48:44 crc kubenswrapper[4720]: I0202 09:48:44.887325 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:48:44 crc kubenswrapper[4720]: E0202 09:48:44.888168 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:48:59 crc kubenswrapper[4720]: I0202 09:48:59.886629 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:48:59 crc kubenswrapper[4720]: E0202 09:48:59.887468 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:49:10 crc kubenswrapper[4720]: I0202 09:49:10.886572 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:49:10 crc kubenswrapper[4720]: E0202 09:49:10.887393 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
Feb 02 09:49:21 crc kubenswrapper[4720]: I0202 09:49:21.886953 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f"
Feb 02 09:49:21 crc kubenswrapper[4720]: E0202 09:49:21.887646 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2"
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:49:35 crc kubenswrapper[4720]: I0202 09:49:35.886794 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:49:35 crc kubenswrapper[4720]: E0202 09:49:35.887669 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:49:48 crc kubenswrapper[4720]: I0202 09:49:48.888446 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:49:48 crc kubenswrapper[4720]: E0202 09:49:48.889506 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:50:03 crc kubenswrapper[4720]: I0202 09:50:03.887412 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:50:03 crc kubenswrapper[4720]: E0202 09:50:03.888306 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.201300 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:15 crc kubenswrapper[4720]: E0202 09:50:15.202249 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9717b267-32e0-41d9-9ae9-c713df483953" containerName="collect-profiles" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.202265 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9717b267-32e0-41d9-9ae9-c713df483953" containerName="collect-profiles" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.202534 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9717b267-32e0-41d9-9ae9-c713df483953" containerName="collect-profiles" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.204170 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.237022 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.292113 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.292161 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.292602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7rb\" (UniqueName: \"kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.394085 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7rb\" (UniqueName: \"kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.394534 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.394563 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.394988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.395022 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.419830 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bc7rb\" (UniqueName: \"kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb\") pod \"redhat-operators-zqpjh\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:15 crc kubenswrapper[4720]: I0202 09:50:15.525320 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.055802 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.871297 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerID="7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036" exitCode=0 Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.871516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerDied","Data":"7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036"} Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.871550 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerStarted","Data":"ef1570dc82b52c7483321c8f0ee8de694d61185b1df623756aafad609bd550b0"} Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.873084 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:50:16 crc kubenswrapper[4720]: I0202 09:50:16.894899 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:50:16 crc kubenswrapper[4720]: E0202 09:50:16.895359 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:50:17 crc kubenswrapper[4720]: I0202 09:50:17.881593 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerStarted","Data":"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7"} Feb 02 09:50:23 crc kubenswrapper[4720]: I0202 09:50:23.996333 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerID="5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7" exitCode=0 Feb 02 09:50:23 crc kubenswrapper[4720]: I0202 09:50:23.996484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerDied","Data":"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7"} Feb 02 09:50:25 crc kubenswrapper[4720]: I0202 09:50:25.006829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" 
event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerStarted","Data":"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278"} Feb 02 09:50:25 crc kubenswrapper[4720]: I0202 09:50:25.041647 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqpjh" podStartSLOduration=2.427798142 podStartE2EDuration="10.041625775s" podCreationTimestamp="2026-02-02 09:50:15 +0000 UTC" firstStartedPulling="2026-02-02 09:50:16.872847609 +0000 UTC m=+3250.728473165" lastFinishedPulling="2026-02-02 09:50:24.486675242 +0000 UTC m=+3258.342300798" observedRunningTime="2026-02-02 09:50:25.038103893 +0000 UTC m=+3258.893729449" watchObservedRunningTime="2026-02-02 09:50:25.041625775 +0000 UTC m=+3258.897251341" Feb 02 09:50:25 crc kubenswrapper[4720]: I0202 09:50:25.525644 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:25 crc kubenswrapper[4720]: I0202 09:50:25.525702 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:26 crc kubenswrapper[4720]: I0202 09:50:26.577562 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqpjh" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" probeResult="failure" output=< Feb 02 09:50:26 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:50:26 crc kubenswrapper[4720]: > Feb 02 09:50:29 crc kubenswrapper[4720]: I0202 09:50:29.887465 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:50:29 crc kubenswrapper[4720]: E0202 09:50:29.888606 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:50:36 crc kubenswrapper[4720]: I0202 09:50:36.573451 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqpjh" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" probeResult="failure" output=< Feb 02 09:50:36 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:50:36 crc kubenswrapper[4720]: > Feb 02 09:50:40 crc kubenswrapper[4720]: I0202 09:50:40.887076 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:50:40 crc kubenswrapper[4720]: E0202 09:50:40.887678 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:50:45 crc kubenswrapper[4720]: I0202 09:50:45.587348 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:45 crc 
kubenswrapper[4720]: I0202 09:50:45.644692 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:46 crc kubenswrapper[4720]: I0202 09:50:46.402363 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:47 crc kubenswrapper[4720]: I0202 09:50:47.211360 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqpjh" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" containerID="cri-o://09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278" gracePeriod=2 Feb 02 09:50:47 crc kubenswrapper[4720]: I0202 09:50:47.907838 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:47.998775 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities\") pod \"ba2745f9-5be1-4d79-a767-a00103f22d3c\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:47.999023 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7rb\" (UniqueName: \"kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb\") pod \"ba2745f9-5be1-4d79-a767-a00103f22d3c\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.000056 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities" (OuterVolumeSpecName: "utilities") pod "ba2745f9-5be1-4d79-a767-a00103f22d3c" (UID: "ba2745f9-5be1-4d79-a767-a00103f22d3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.000601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content\") pod \"ba2745f9-5be1-4d79-a767-a00103f22d3c\" (UID: \"ba2745f9-5be1-4d79-a767-a00103f22d3c\") " Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.001213 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.011093 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb" (OuterVolumeSpecName: "kube-api-access-bc7rb") pod "ba2745f9-5be1-4d79-a767-a00103f22d3c" (UID: "ba2745f9-5be1-4d79-a767-a00103f22d3c"). InnerVolumeSpecName "kube-api-access-bc7rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.103893 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7rb\" (UniqueName: \"kubernetes.io/projected/ba2745f9-5be1-4d79-a767-a00103f22d3c-kube-api-access-bc7rb\") on node \"crc\" DevicePath \"\"" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.110006 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba2745f9-5be1-4d79-a767-a00103f22d3c" (UID: "ba2745f9-5be1-4d79-a767-a00103f22d3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.207506 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba2745f9-5be1-4d79-a767-a00103f22d3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.224141 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerID="09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278" exitCode=0 Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.224200 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerDied","Data":"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278"} Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.224230 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqpjh" event={"ID":"ba2745f9-5be1-4d79-a767-a00103f22d3c","Type":"ContainerDied","Data":"ef1570dc82b52c7483321c8f0ee8de694d61185b1df623756aafad609bd550b0"} Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.224248 4720 scope.go:117] "RemoveContainer" containerID="09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.224394 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqpjh" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.247179 4720 scope.go:117] "RemoveContainer" containerID="5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.273089 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.281752 4720 scope.go:117] "RemoveContainer" containerID="7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.289709 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqpjh"] Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.315263 4720 scope.go:117] "RemoveContainer" containerID="09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278" Feb 02 09:50:48 crc kubenswrapper[4720]: E0202 09:50:48.315778 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278\": container with ID starting with 09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278 not found: ID does not exist" containerID="09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.315828 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278"} err="failed to get container status \"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278\": rpc error: code = NotFound desc = could not find container \"09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278\": container with ID starting with 09acb4a32eb60a7abc6aae40c509919cd589fdb91fd5983103608c76a0612278 not found: ID does not exist" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.315858 4720 scope.go:117] "RemoveContainer" containerID="5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7" Feb 02 09:50:48 crc kubenswrapper[4720]: E0202 09:50:48.316339 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7\": container with ID starting with 5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7 not found: ID does not exist" containerID="5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.316400 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7"} err="failed to get container status \"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7\": rpc error: code = NotFound desc = could not find container \"5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7\": container with ID starting with 5b0b27237d9603103f50e543936219efabefb52d1b13f35c57cec4be4d6573c7 not found: ID does not exist" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.316435 4720 scope.go:117] "RemoveContainer" containerID="7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036" Feb 02 09:50:48 crc kubenswrapper[4720]: E0202 09:50:48.316762 4720 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036\": container with ID starting with 7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036 not found: ID does not exist" containerID="7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.316833 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036"} err="failed to get container status \"7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036\": rpc error: code = NotFound desc = could not find container \"7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036\": container with ID starting with 7a9bc37d94c47ae136c192f4117fea9e0ee1c852a0b6d5ec091e3e64f2964036 not found: ID does not exist" Feb 02 09:50:48 crc kubenswrapper[4720]: I0202 09:50:48.897060 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" path="/var/lib/kubelet/pods/ba2745f9-5be1-4d79-a767-a00103f22d3c/volumes" Feb 02 09:50:51 crc kubenswrapper[4720]: I0202 09:50:51.887273 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:50:51 crc kubenswrapper[4720]: E0202 09:50:51.888080 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:51:06 crc kubenswrapper[4720]: I0202 09:51:06.893089 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:51:06 crc kubenswrapper[4720]: E0202 09:51:06.894660 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:51:18 crc kubenswrapper[4720]: I0202 09:51:18.890422 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:51:18 crc kubenswrapper[4720]: E0202 09:51:18.891278 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:51:31 crc kubenswrapper[4720]: I0202 09:51:31.887057 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:51:31 crc kubenswrapper[4720]: E0202 09:51:31.887858 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:51:45 crc kubenswrapper[4720]: I0202 09:51:45.887172 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:51:45 crc kubenswrapper[4720]: E0202 09:51:45.887906 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:00 crc kubenswrapper[4720]: I0202 09:52:00.887695 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:52:00 crc kubenswrapper[4720]: E0202 09:52:00.888454 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:15 crc kubenswrapper[4720]: I0202 09:52:15.887250 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:52:15 crc kubenswrapper[4720]: E0202 09:52:15.888006 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:28 crc kubenswrapper[4720]: I0202 09:52:28.887966 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:52:28 crc kubenswrapper[4720]: E0202 09:52:28.889094 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:43 crc kubenswrapper[4720]: I0202 09:52:43.887113 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:52:43 crc kubenswrapper[4720]: E0202 09:52:43.888580 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.540203 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:52:46 crc kubenswrapper[4720]: E0202 09:52:46.541775 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="extract-utilities" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.541818 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="extract-utilities" Feb 02 09:52:46 crc kubenswrapper[4720]: E0202 09:52:46.541855 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.541864 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" Feb 02 09:52:46 crc kubenswrapper[4720]: E0202 09:52:46.541913 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="extract-content" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.541924 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="extract-content" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.542189 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2745f9-5be1-4d79-a767-a00103f22d3c" containerName="registry-server" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.545555 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.569648 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.619777 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rjg\" (UniqueName: \"kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.619853 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.620118 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.721669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.721893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6rjg\" (UniqueName: \"kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.721930 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.722866 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.722905 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.740671 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s6rjg\" (UniqueName: \"kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg\") pod \"community-operators-54v92\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:46 crc kubenswrapper[4720]: I0202 09:52:46.869558 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:47 crc kubenswrapper[4720]: I0202 09:52:47.522026 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:52:48 crc kubenswrapper[4720]: I0202 09:52:48.349122 4720 generic.go:334] "Generic (PLEG): container finished" podID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerID="061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b" exitCode=0 Feb 02 09:52:48 crc kubenswrapper[4720]: I0202 09:52:48.349414 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerDied","Data":"061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b"} Feb 02 09:52:48 crc kubenswrapper[4720]: I0202 09:52:48.349480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerStarted","Data":"8e00f226f4f56e4e06e7c5486a2c139784464a3521d3480e716471eda9ea6d59"} Feb 02 09:52:49 crc kubenswrapper[4720]: I0202 09:52:49.363284 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerStarted","Data":"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b"} Feb 02 09:52:52 crc kubenswrapper[4720]: I0202 09:52:52.391143 4720 generic.go:334] "Generic (PLEG): container finished" podID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerID="467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b" exitCode=0 Feb 02 09:52:52 crc kubenswrapper[4720]: I0202 09:52:52.391220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerDied","Data":"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b"} Feb 02 09:52:53 crc kubenswrapper[4720]: I0202 09:52:53.402713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerStarted","Data":"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186"} Feb 02 09:52:55 crc kubenswrapper[4720]: I0202 09:52:55.888393 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:52:55 crc kubenswrapper[4720]: E0202 09:52:55.889149 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:52:56 crc kubenswrapper[4720]: I0202 09:52:56.870696 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:56 crc kubenswrapper[4720]: I0202 09:52:56.871173 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:52:57 crc kubenswrapper[4720]: I0202 09:52:57.920831 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-54v92" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="registry-server" probeResult="failure" output=< Feb 02 09:52:57 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:52:57 crc kubenswrapper[4720]: > Feb 02 09:53:06 crc kubenswrapper[4720]: I0202 09:53:06.940937 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:53:06 crc kubenswrapper[4720]: I0202 09:53:06.961902 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54v92" podStartSLOduration=16.524738161 podStartE2EDuration="20.961865529s" podCreationTimestamp="2026-02-02 09:52:46 +0000 UTC" firstStartedPulling="2026-02-02 09:52:48.352143623 +0000 UTC m=+3402.207769179" lastFinishedPulling="2026-02-02 09:52:52.789270991 +0000 UTC m=+3406.644896547" observedRunningTime="2026-02-02 09:52:53.422533893 +0000 UTC m=+3407.278159449" watchObservedRunningTime="2026-02-02 09:53:06.961865529 +0000 UTC m=+3420.817491085" Feb 02 09:53:06 crc kubenswrapper[4720]: I0202 09:53:06.999597 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:53:07 crc kubenswrapper[4720]: I0202 09:53:07.181937 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:53:07 crc kubenswrapper[4720]: I0202 09:53:07.886785 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:53:07 crc kubenswrapper[4720]: E0202 09:53:07.887452 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 09:53:08 crc kubenswrapper[4720]: I0202 09:53:08.527682 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54v92" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="registry-server" containerID="cri-o://1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186" gracePeriod=2 Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.400553 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.537931 4720 generic.go:334] "Generic (PLEG): container finished" podID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerID="1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186" exitCode=0 Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.537997 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerDied","Data":"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186"} Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.538095 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54v92" event={"ID":"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c","Type":"ContainerDied","Data":"8e00f226f4f56e4e06e7c5486a2c139784464a3521d3480e716471eda9ea6d59"} Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.538140 4720 scope.go:117] "RemoveContainer" containerID="1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.538035 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54v92" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.554432 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6rjg\" (UniqueName: \"kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg\") pod \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.554517 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content\") pod \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.554550 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities\") pod \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\" (UID: \"da4c8fd5-67af-4c04-8b18-a517c9ce4c1c\") " Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.555481 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities" (OuterVolumeSpecName: "utilities") pod "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" (UID: "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.567081 4720 scope.go:117] "RemoveContainer" containerID="467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.571135 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg" (OuterVolumeSpecName: "kube-api-access-s6rjg") pod "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" (UID: "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c"). InnerVolumeSpecName "kube-api-access-s6rjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.631459 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" (UID: "da4c8fd5-67af-4c04-8b18-a517c9ce4c1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.634094 4720 scope.go:117] "RemoveContainer" containerID="061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.658725 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.658980 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.659093 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6rjg\" (UniqueName: \"kubernetes.io/projected/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c-kube-api-access-s6rjg\") on node \"crc\" DevicePath \"\"" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.687174 4720 scope.go:117] "RemoveContainer" containerID="1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186" Feb 02 09:53:09 crc kubenswrapper[4720]: E0202 09:53:09.687962 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186\": container with ID starting with 1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186 not found: ID does not exist" containerID="1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.688009 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186"} err="failed to get container status \"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186\": rpc error: code = NotFound desc = could not find container \"1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186\": container with ID starting with 1e7a4c8ad318a2feb139a8a0c0a737512fe988decb3b361a40c74025bc0e9186 not found: ID does not exist" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.688045 4720 scope.go:117] "RemoveContainer" containerID="467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b" Feb 02 09:53:09 crc kubenswrapper[4720]: E0202 09:53:09.688500 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b\": container with ID starting with 467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b not found: ID does not exist" containerID="467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.688637 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b"} err="failed to get container status \"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b\": rpc error: code = NotFound desc = could not find container \"467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b\": container with ID starting with 467e4c686a01182c40fe849e385d55a9e00256ab792677b7e252d7d35c59611b not found: ID does not exist" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.688759 4720 scope.go:117] "RemoveContainer" containerID="061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b" Feb 02 09:53:09 crc kubenswrapper[4720]: E0202 09:53:09.689316 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b\": container with ID starting with 061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b not found: ID does not exist" containerID="061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.689341 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b"} err="failed to get container status \"061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b\": rpc error: code = NotFound desc = could not find container \"061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b\": container with ID starting with 061d76c5ad0d7f6cb8c6ad8fad0383a29b0791decd263eca1c325f59d5ae764b not found: ID does not exist" Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.871769 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:53:09 crc kubenswrapper[4720]: I0202 09:53:09.880670 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54v92"] Feb 02 09:53:10 crc kubenswrapper[4720]: I0202 09:53:10.899116 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" path="/var/lib/kubelet/pods/da4c8fd5-67af-4c04-8b18-a517c9ce4c1c/volumes" Feb 02 09:53:19 crc kubenswrapper[4720]: I0202 09:53:19.887061 4720 scope.go:117] "RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:53:20 crc kubenswrapper[4720]: I0202 09:53:20.632073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060"} Feb 02 09:55:47 crc kubenswrapper[4720]: I0202 09:55:47.902363 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:55:47 crc kubenswrapper[4720]: I0202 09:55:47.902892 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 02 09:56:17 crc kubenswrapper[4720]: I0202 09:56:17.902226 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:56:17 crc kubenswrapper[4720]: I0202 09:56:17.902839 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:56:47 crc kubenswrapper[4720]: I0202 09:56:47.902923 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:56:47 crc kubenswrapper[4720]: I0202 09:56:47.903641 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:56:47 crc kubenswrapper[4720]: I0202 09:56:47.903701 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 09:56:47 crc kubenswrapper[4720]: I0202 09:56:47.904782 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 09:56:47 crc kubenswrapper[4720]: I0202 09:56:47.904915 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060" gracePeriod=600 Feb 02 09:56:48 crc kubenswrapper[4720]: I0202 09:56:48.530562 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060" exitCode=0 Feb 02 09:56:48 crc kubenswrapper[4720]: I0202 09:56:48.530636 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060"} Feb 02 09:56:48 crc kubenswrapper[4720]: I0202 09:56:48.530838 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59"} Feb 02 09:56:48 crc kubenswrapper[4720]: I0202 09:56:48.530860 4720 scope.go:117] 
"RemoveContainer" containerID="de11f09ef2048722719b28c4421e165ab8d356bb3befa002e8a122f9e4a0ff5f" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.042823 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:02 crc kubenswrapper[4720]: E0202 09:57:02.043849 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="extract-utilities" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.043869 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="extract-utilities" Feb 02 09:57:02 crc kubenswrapper[4720]: E0202 09:57:02.043902 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="extract-content" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.043911 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="extract-content" Feb 02 09:57:02 crc kubenswrapper[4720]: E0202 09:57:02.043938 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="registry-server" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.043947 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="registry-server" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.044184 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4c8fd5-67af-4c04-8b18-a517c9ce4c1c" containerName="registry-server" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.045845 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.085026 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.146301 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfhq6\" (UniqueName: \"kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.146505 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.146677 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.243843 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.246350 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.248024 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.248243 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.248339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfhq6\" (UniqueName: \"kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.248711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.248729 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.255057 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.289233 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfhq6\" (UniqueName: \"kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6\") pod \"redhat-marketplace-sc9vn\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.349971 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.350064 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx8t\" (UniqueName: \"kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.350215 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.395847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.451745 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.451854 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.451929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx8t\" (UniqueName: \"kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.452870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.452963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.473690 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx8t\" (UniqueName: \"kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t\") pod \"certified-operators-lq64d\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:02 crc kubenswrapper[4720]: I0202 09:57:02.565389 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.137003 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:03 crc kubenswrapper[4720]: W0202 09:57:03.336646 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572f2628_7049_44e3_a10f_9e3dcf0bb0a2.slice/crio-6c17c5692a54c6602ec6eacbc3b73443b85ad596a32dab006919e8891406c97a WatchSource:0}: Error finding container 6c17c5692a54c6602ec6eacbc3b73443b85ad596a32dab006919e8891406c97a: Status 404 returned error can't find the container with id 6c17c5692a54c6602ec6eacbc3b73443b85ad596a32dab006919e8891406c97a Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.337557 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.704729 4720 generic.go:334] "Generic (PLEG): container finished" podID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerID="35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8" exitCode=0 Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.704806 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerDied","Data":"35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8"} Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.704840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerStarted","Data":"9cf84070777af3b03dfdd8ea85c220088715906a989373c9424abd61c06f32f7"} Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.708254 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.712295 4720 generic.go:334] "Generic (PLEG): container finished" podID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerID="910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83" exitCode=0 Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.712604 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerDied","Data":"910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83"} Feb 02 09:57:03 crc kubenswrapper[4720]: I0202 09:57:03.712633 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerStarted","Data":"6c17c5692a54c6602ec6eacbc3b73443b85ad596a32dab006919e8891406c97a"} Feb 02 09:57:04 crc kubenswrapper[4720]: I0202 09:57:04.724696 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerStarted","Data":"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153"} Feb 02 09:57:05 crc kubenswrapper[4720]: I0202 09:57:05.737454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" 
event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerStarted","Data":"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0"} Feb 02 09:57:07 crc kubenswrapper[4720]: I0202 09:57:07.758037 4720 generic.go:334] "Generic (PLEG): container finished" podID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerID="c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153" exitCode=0 Feb 02 09:57:07 crc kubenswrapper[4720]: I0202 09:57:07.758138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerDied","Data":"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153"} Feb 02 09:57:08 crc kubenswrapper[4720]: I0202 09:57:08.769178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerStarted","Data":"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2"} Feb 02 09:57:08 crc kubenswrapper[4720]: I0202 09:57:08.771009 4720 generic.go:334] "Generic (PLEG): container finished" podID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerID="40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0" exitCode=0 Feb 02 09:57:08 crc kubenswrapper[4720]: I0202 09:57:08.771043 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerDied","Data":"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0"} Feb 02 09:57:08 crc kubenswrapper[4720]: I0202 09:57:08.792689 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sc9vn" podStartSLOduration=2.184651871 podStartE2EDuration="6.792662257s" podCreationTimestamp="2026-02-02 09:57:02 +0000 UTC" firstStartedPulling="2026-02-02 09:57:03.707605712 +0000 UTC m=+3657.563231268" lastFinishedPulling="2026-02-02 09:57:08.315616098 +0000 UTC m=+3662.171241654" observedRunningTime="2026-02-02 09:57:08.789507862 +0000 UTC m=+3662.645133468" watchObservedRunningTime="2026-02-02 09:57:08.792662257 +0000 UTC m=+3662.648287853" Feb 02 09:57:09 crc kubenswrapper[4720]: I0202 09:57:09.781794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerStarted","Data":"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9"} Feb 02 09:57:09 crc kubenswrapper[4720]: I0202 09:57:09.802517 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq64d" podStartSLOduration=2.006257911 podStartE2EDuration="7.802495262s" podCreationTimestamp="2026-02-02 09:57:02 +0000 UTC" firstStartedPulling="2026-02-02 09:57:03.718279939 +0000 UTC m=+3657.573905495" lastFinishedPulling="2026-02-02 09:57:09.51451728 +0000 UTC m=+3663.370142846" observedRunningTime="2026-02-02 09:57:09.796898687 +0000 UTC m=+3663.652524243" watchObservedRunningTime="2026-02-02 09:57:09.802495262 +0000 UTC m=+3663.658120808" Feb 02 09:57:12 crc kubenswrapper[4720]: I0202 09:57:12.396386 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:12 crc kubenswrapper[4720]: I0202 09:57:12.397018 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:12 crc kubenswrapper[4720]: I0202 09:57:12.566198 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:12 crc kubenswrapper[4720]: I0202 09:57:12.566728 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:13 crc kubenswrapper[4720]: I0202 09:57:13.445828 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sc9vn" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="registry-server" probeResult="failure" output=< Feb 02 09:57:13 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:57:13 crc kubenswrapper[4720]: > Feb 02 09:57:13 crc kubenswrapper[4720]: I0202 09:57:13.620985 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lq64d" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" probeResult="failure" output=< Feb 02 09:57:13 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:57:13 crc kubenswrapper[4720]: > Feb 02 09:57:22 crc kubenswrapper[4720]: I0202 09:57:22.444865 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:22 crc kubenswrapper[4720]: I0202 09:57:22.502561 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:22 crc kubenswrapper[4720]: I0202 09:57:22.680753 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:23 crc kubenswrapper[4720]: I0202 09:57:23.610281 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lq64d" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" probeResult="failure" output=< Feb 02 09:57:23 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 09:57:23 crc kubenswrapper[4720]: > Feb 02 09:57:23 crc kubenswrapper[4720]: I0202 09:57:23.902785 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sc9vn" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="registry-server" containerID="cri-o://1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2" gracePeriod=2 Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.639162 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.738725 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content\") pod \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.738928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfhq6\" (UniqueName: \"kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6\") pod \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.738989 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities\") pod \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\" (UID: \"55b0ca0a-78bf-4b15-baa0-9361ed44d96f\") " Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.739737 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities" (OuterVolumeSpecName: "utilities") pod "55b0ca0a-78bf-4b15-baa0-9361ed44d96f" (UID: "55b0ca0a-78bf-4b15-baa0-9361ed44d96f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.754283 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6" (OuterVolumeSpecName: "kube-api-access-rfhq6") pod "55b0ca0a-78bf-4b15-baa0-9361ed44d96f" (UID: "55b0ca0a-78bf-4b15-baa0-9361ed44d96f"). InnerVolumeSpecName "kube-api-access-rfhq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.761608 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55b0ca0a-78bf-4b15-baa0-9361ed44d96f" (UID: "55b0ca0a-78bf-4b15-baa0-9361ed44d96f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.841011 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.841058 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfhq6\" (UniqueName: \"kubernetes.io/projected/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-kube-api-access-rfhq6\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.841073 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b0ca0a-78bf-4b15-baa0-9361ed44d96f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.916151 4720 generic.go:334] "Generic (PLEG): container finished" podID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerID="1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2" exitCode=0 Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.916193 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerDied","Data":"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2"} Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.916220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc9vn" event={"ID":"55b0ca0a-78bf-4b15-baa0-9361ed44d96f","Type":"ContainerDied","Data":"9cf84070777af3b03dfdd8ea85c220088715906a989373c9424abd61c06f32f7"} Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.916238 4720 scope.go:117] "RemoveContainer" containerID="1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.916254 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc9vn" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.947902 4720 scope.go:117] "RemoveContainer" containerID="c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153" Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.952620 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.966453 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc9vn"] Feb 02 09:57:24 crc kubenswrapper[4720]: I0202 09:57:24.971026 4720 scope.go:117] "RemoveContainer" containerID="35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.020509 4720 scope.go:117] "RemoveContainer" containerID="1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2" Feb 02 09:57:25 crc kubenswrapper[4720]: E0202 09:57:25.023198 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2\": container with ID starting with 1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2 not found: ID does not exist" containerID="1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.023251 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2"} err="failed to get container status \"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2\": rpc error: code = NotFound desc = could not find container \"1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2\": container with ID starting with 1b72b59e31d47693cf3e77d57a31f8ab42b9032d975617f675a9de46c8b8a5d2 not found: ID does not exist" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.023295 4720 scope.go:117] "RemoveContainer" containerID="c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153" Feb 02 09:57:25 crc kubenswrapper[4720]: E0202 09:57:25.023664 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153\": container with ID starting with c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153 not found: ID does not exist" containerID="c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.023704 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153"} err="failed to get container status \"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153\": rpc error: code = NotFound desc = could not find container \"c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153\": container with ID starting with c5082fd5e5be38cde397fed515ba152e087b98185497af75143a88756239f153 not found: ID does not exist" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.023719 4720 scope.go:117] "RemoveContainer" containerID="35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8" Feb 02 09:57:25 crc kubenswrapper[4720]: E0202 09:57:25.023987 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8\": container with ID starting with 35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8 not found: ID does not exist" containerID="35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8" Feb 02 09:57:25 crc kubenswrapper[4720]: I0202 09:57:25.024013 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8"} err="failed to get container status \"35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8\": rpc error: code = NotFound desc = could not find container \"35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8\": container with ID starting with 35e5c75192c970ab7391d2073dcba42a1281f267507a686fd48469dc17f847f8 not found: ID does not exist" Feb 02 09:57:26 crc kubenswrapper[4720]: I0202 09:57:26.898775 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" path="/var/lib/kubelet/pods/55b0ca0a-78bf-4b15-baa0-9361ed44d96f/volumes" Feb 02 09:57:32 crc kubenswrapper[4720]: I0202 09:57:32.612710 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:32 crc kubenswrapper[4720]: I0202 09:57:32.673272 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:33 crc kubenswrapper[4720]: I0202 09:57:33.246600 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:33 crc kubenswrapper[4720]: I0202 09:57:33.990246 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq64d" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" containerID="cri-o://c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9" gracePeriod=2 Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.762323 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.831266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities\") pod \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.831383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpx8t\" (UniqueName: \"kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t\") pod \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.831563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content\") pod \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\" (UID: \"572f2628-7049-44e3-a10f-9e3dcf0bb0a2\") " Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.832868 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities" (OuterVolumeSpecName: "utilities") pod "572f2628-7049-44e3-a10f-9e3dcf0bb0a2" (UID: "572f2628-7049-44e3-a10f-9e3dcf0bb0a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.859761 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t" (OuterVolumeSpecName: "kube-api-access-qpx8t") pod "572f2628-7049-44e3-a10f-9e3dcf0bb0a2" (UID: "572f2628-7049-44e3-a10f-9e3dcf0bb0a2"). InnerVolumeSpecName "kube-api-access-qpx8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.919189 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "572f2628-7049-44e3-a10f-9e3dcf0bb0a2" (UID: "572f2628-7049-44e3-a10f-9e3dcf0bb0a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.934727 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.934774 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpx8t\" (UniqueName: \"kubernetes.io/projected/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-kube-api-access-qpx8t\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:34 crc kubenswrapper[4720]: I0202 09:57:34.934790 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572f2628-7049-44e3-a10f-9e3dcf0bb0a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.000022 4720 generic.go:334] "Generic (PLEG): container finished" podID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerID="c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9" exitCode=0 Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.000074 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq64d" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.000077 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerDied","Data":"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9"} Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.000185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq64d" event={"ID":"572f2628-7049-44e3-a10f-9e3dcf0bb0a2","Type":"ContainerDied","Data":"6c17c5692a54c6602ec6eacbc3b73443b85ad596a32dab006919e8891406c97a"} Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.000208 4720 scope.go:117] "RemoveContainer" containerID="c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.022163 4720 scope.go:117] "RemoveContainer" containerID="40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.032148 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.062498 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq64d"] Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.068045 4720 scope.go:117] "RemoveContainer" containerID="910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.107333 4720 scope.go:117] "RemoveContainer" containerID="c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9" Feb 02 09:57:35 crc kubenswrapper[4720]: E0202 09:57:35.107867 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9\": container with ID starting with c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9 not found: ID does not exist" containerID="c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.108085 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9"} err="failed to get container status \"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9\": rpc error: code = NotFound desc = could not find container \"c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9\": container with ID starting with c8d4e6014f7741199f0518459efcb965b59f55b9d19372ccb922acea7eafd0f9 not found: ID does not exist" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.108120 4720 scope.go:117] "RemoveContainer" containerID="40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0" Feb 02 09:57:35 crc kubenswrapper[4720]: E0202 09:57:35.109032 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0\": container with ID starting with 40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0 not found: ID does not exist" containerID="40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.109067 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0"} err="failed to get container status \"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0\": rpc error: code = NotFound desc = could not find container \"40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0\": container with ID starting with 40db8889241338e850a3c7023e0f4ddaec19028fa71dc49c55462c9fa96709a0 not found: ID does not exist" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.109092 4720 scope.go:117] "RemoveContainer" containerID="910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83" Feb 02 09:57:35 crc kubenswrapper[4720]: E0202 09:57:35.109534 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83\": container with ID starting with 910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83 not found: ID does not exist" containerID="910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83" Feb 02 09:57:35 crc kubenswrapper[4720]: I0202 09:57:35.109566 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83"} err="failed to get container status \"910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83\": rpc error: code = NotFound desc = could not find container \"910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83\": container with ID starting with 910c939f2ed1d4869ff7205cd9e22b29b91e857ac4e94dec264871bc22e27b83 not found: ID does not exist" Feb 02 09:57:36 crc kubenswrapper[4720]: I0202 09:57:36.902124 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" path="/var/lib/kubelet/pods/572f2628-7049-44e3-a10f-9e3dcf0bb0a2/volumes" Feb 02 09:59:17 crc kubenswrapper[4720]: I0202 09:59:17.912444 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:59:17 crc kubenswrapper[4720]: I0202 09:59:17.913026 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 09:59:47 crc kubenswrapper[4720]: I0202 09:59:47.902454 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 09:59:47 crc kubenswrapper[4720]: I0202 09:59:47.903080 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.170507 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k"] Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171665 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171680 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171691 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="extract-content" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171699 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="extract-content" Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171717 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171727 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171749 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="extract-utilities" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171756 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="extract-utilities" Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171777 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="extract-content" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171784 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="extract-content" Feb 02 10:00:00 crc kubenswrapper[4720]: E0202 10:00:00.171811 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" 
containerName="extract-utilities" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.171818 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="extract-utilities" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.172049 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b0ca0a-78bf-4b15-baa0-9361ed44d96f" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.172079 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="572f2628-7049-44e3-a10f-9e3dcf0bb0a2" containerName="registry-server" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.172832 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.175530 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.175771 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.184374 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k"] Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.225992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.226071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.226166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2g5\" (UniqueName: \"kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.340632 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2g5\" (UniqueName: \"kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.340794 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.340876 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.346399 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.357692 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2g5\" (UniqueName: \"kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.358445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume\") pod \"collect-profiles-29500440-whx8k\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:00 crc kubenswrapper[4720]: I0202 10:00:00.491693 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:01 crc kubenswrapper[4720]: I0202 10:00:01.049269 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k"] Feb 02 10:00:01 crc kubenswrapper[4720]: I0202 10:00:01.370393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" event={"ID":"a8dad58a-25da-42b9-badb-146628294c5c","Type":"ContainerStarted","Data":"abbf2139bc474fc34fcc12fbb76c2fd40f54b17aad4d2e18a7329d1bfec75ff3"} Feb 02 10:00:01 crc kubenswrapper[4720]: I0202 10:00:01.370445 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" event={"ID":"a8dad58a-25da-42b9-badb-146628294c5c","Type":"ContainerStarted","Data":"6013474d5a23f5c81ab5d210015a3f76c8a88e9d0fce843643d37e01496c5e42"} Feb 02 10:00:01 crc kubenswrapper[4720]: I0202 10:00:01.410850 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" podStartSLOduration=1.410828802 podStartE2EDuration="1.410828802s" podCreationTimestamp="2026-02-02 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:00:01.402546452 +0000 UTC m=+3835.258172008" watchObservedRunningTime="2026-02-02 10:00:01.410828802 +0000 UTC m=+3835.266454348" Feb 02 10:00:02 crc kubenswrapper[4720]: I0202 10:00:02.378641 4720 generic.go:334] "Generic (PLEG): container finished" podID="a8dad58a-25da-42b9-badb-146628294c5c" containerID="abbf2139bc474fc34fcc12fbb76c2fd40f54b17aad4d2e18a7329d1bfec75ff3" exitCode=0 Feb 02 10:00:02 crc kubenswrapper[4720]: I0202 10:00:02.378748 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" event={"ID":"a8dad58a-25da-42b9-badb-146628294c5c","Type":"ContainerDied","Data":"abbf2139bc474fc34fcc12fbb76c2fd40f54b17aad4d2e18a7329d1bfec75ff3"} Feb 02 10:00:03 crc kubenswrapper[4720]: I0202 10:00:03.933152 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.042611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume\") pod \"a8dad58a-25da-42b9-badb-146628294c5c\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.042773 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume\") pod \"a8dad58a-25da-42b9-badb-146628294c5c\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.042936 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2g5\" (UniqueName: \"kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5\") pod \"a8dad58a-25da-42b9-badb-146628294c5c\" (UID: \"a8dad58a-25da-42b9-badb-146628294c5c\") " Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.044273 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8dad58a-25da-42b9-badb-146628294c5c" (UID: "a8dad58a-25da-42b9-badb-146628294c5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.059455 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5" (OuterVolumeSpecName: "kube-api-access-kv2g5") pod "a8dad58a-25da-42b9-badb-146628294c5c" (UID: "a8dad58a-25da-42b9-badb-146628294c5c"). InnerVolumeSpecName "kube-api-access-kv2g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.078050 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8dad58a-25da-42b9-badb-146628294c5c" (UID: "a8dad58a-25da-42b9-badb-146628294c5c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.148199 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8dad58a-25da-42b9-badb-146628294c5c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.148501 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8dad58a-25da-42b9-badb-146628294c5c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.148514 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv2g5\" (UniqueName: \"kubernetes.io/projected/a8dad58a-25da-42b9-badb-146628294c5c-kube-api-access-kv2g5\") on node \"crc\" DevicePath \"\"" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.398374 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" event={"ID":"a8dad58a-25da-42b9-badb-146628294c5c","Type":"ContainerDied","Data":"6013474d5a23f5c81ab5d210015a3f76c8a88e9d0fce843643d37e01496c5e42"} Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.398420 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6013474d5a23f5c81ab5d210015a3f76c8a88e9d0fce843643d37e01496c5e42" Feb 02 10:00:04 crc kubenswrapper[4720]: I0202 10:00:04.398431 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500440-whx8k" Feb 02 10:00:05 crc kubenswrapper[4720]: I0202 10:00:05.009470 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"] Feb 02 10:00:05 crc kubenswrapper[4720]: I0202 10:00:05.018606 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500395-dpmkv"] Feb 02 10:00:06 crc kubenswrapper[4720]: I0202 10:00:06.928632 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24895420-6b66-40eb-8ec5-78f761760fe7" path="/var/lib/kubelet/pods/24895420-6b66-40eb-8ec5-78f761760fe7/volumes" Feb 02 10:00:17 crc kubenswrapper[4720]: I0202 10:00:17.904367 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:00:17 crc kubenswrapper[4720]: I0202 10:00:17.904995 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:00:17 crc kubenswrapper[4720]: I0202 10:00:17.905048 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 10:00:17 crc kubenswrapper[4720]: I0202 10:00:17.905847 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59"} 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:00:17 crc kubenswrapper[4720]: I0202 10:00:17.905928 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" gracePeriod=600 Feb 02 10:00:18 crc kubenswrapper[4720]: E0202 10:00:18.036595 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:00:18 crc kubenswrapper[4720]: I0202 10:00:18.521157 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" exitCode=0 Feb 02 10:00:18 crc kubenswrapper[4720]: I0202 10:00:18.521203 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59"} Feb 02 10:00:18 crc kubenswrapper[4720]: I0202 10:00:18.521240 4720 scope.go:117] "RemoveContainer" containerID="2d65e11fb420ae153463d745b33776e3b02d375c31871391b8ce9945ef44a060" Feb 02 10:00:18 crc kubenswrapper[4720]: I0202 10:00:18.521997 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:00:18 crc kubenswrapper[4720]: E0202 10:00:18.522395 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:00:20 crc kubenswrapper[4720]: I0202 10:00:20.487845 4720 scope.go:117] "RemoveContainer" containerID="749925605a100b9397d2ec9ab0c368e37bddf79696c2b1139426dc46facf1868" Feb 02 10:00:28 crc kubenswrapper[4720]: I0202 10:00:28.888647 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:00:28 crc kubenswrapper[4720]: E0202 10:00:28.889685 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:00:41 crc kubenswrapper[4720]: I0202 10:00:41.887727 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:00:41 crc 
kubenswrapper[4720]: E0202 10:00:41.890154 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.459708 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:00:46 crc kubenswrapper[4720]: E0202 10:00:46.460686 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8dad58a-25da-42b9-badb-146628294c5c" containerName="collect-profiles" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.460702 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8dad58a-25da-42b9-badb-146628294c5c" containerName="collect-profiles" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.460946 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8dad58a-25da-42b9-badb-146628294c5c" containerName="collect-profiles" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.462605 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.487500 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.524510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mcz\" (UniqueName: \"kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.524601 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.524689 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.627243 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.627487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mcz\" (UniqueName: \"kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz\") pod \"redhat-operators-8p2w6\" (UID: 
\"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.627893 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.628057 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.628430 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.653736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mcz\" (UniqueName: \"kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz\") pod \"redhat-operators-8p2w6\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:46 crc kubenswrapper[4720]: I0202 10:00:46.809247 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:47 crc kubenswrapper[4720]: I0202 10:00:47.365473 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:00:47 crc kubenswrapper[4720]: I0202 10:00:47.809980 4720 generic.go:334] "Generic (PLEG): container finished" podID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerID="6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c" exitCode=0 Feb 02 10:00:47 crc kubenswrapper[4720]: I0202 10:00:47.810076 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerDied","Data":"6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c"} Feb 02 10:00:47 crc kubenswrapper[4720]: I0202 10:00:47.810250 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerStarted","Data":"3a7eb9d35954f377aee1b0523d6eb34a1377160d9a54c20db2b7fdab132e75cc"} Feb 02 10:00:48 crc kubenswrapper[4720]: I0202 10:00:48.821632 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerStarted","Data":"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283"} Feb 02 10:00:52 crc kubenswrapper[4720]: I0202 10:00:52.886985 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:00:52 crc kubenswrapper[4720]: E0202 10:00:52.887749 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:00:53 crc kubenswrapper[4720]: I0202 10:00:53.872573 4720 generic.go:334] "Generic (PLEG): container finished" podID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerID="48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283" exitCode=0 Feb 02 10:00:53 crc kubenswrapper[4720]: I0202 10:00:53.872676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerDied","Data":"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283"} Feb 02 10:00:54 crc kubenswrapper[4720]: I0202 10:00:54.884397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerStarted","Data":"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5"} Feb 02 10:00:54 crc kubenswrapper[4720]: I0202 10:00:54.906467 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8p2w6" podStartSLOduration=2.434619364 podStartE2EDuration="8.906440972s" podCreationTimestamp="2026-02-02 10:00:46 +0000 UTC" firstStartedPulling="2026-02-02 10:00:47.815572832 +0000 UTC m=+3881.671198388" lastFinishedPulling="2026-02-02 10:00:54.28739444 +0000 UTC m=+3888.143019996" observedRunningTime="2026-02-02 10:00:54.900869928 +0000 UTC m=+3888.756495484" watchObservedRunningTime="2026-02-02 10:00:54.906440972 +0000 UTC m=+3888.762066538" Feb 02 10:00:56 crc kubenswrapper[4720]: I0202 10:00:56.809516 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:56 crc kubenswrapper[4720]: I0202 10:00:56.810099 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:00:57 crc kubenswrapper[4720]: I0202 10:00:57.859504 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8p2w6" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="registry-server" probeResult="failure" output=< Feb 02 10:00:57 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 10:00:57 crc kubenswrapper[4720]: > Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.156488 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500441-mkvqn"] Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.168281 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500441-mkvqn"] Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.168376 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.305500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.305650 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbb54\" (UniqueName: \"kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.305701 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.305855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.407549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbb54\" (UniqueName: \"kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.407640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.407668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.407746 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.425319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle\") pod \"keystone-cron-29500441-mkvqn\" 
(UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.425474 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.431749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbb54\" (UniqueName: \"kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.444728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data\") pod \"keystone-cron-29500441-mkvqn\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.491485 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:00 crc kubenswrapper[4720]: I0202 10:01:00.994050 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500441-mkvqn"] Feb 02 10:01:01 crc kubenswrapper[4720]: I0202 10:01:01.956181 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500441-mkvqn" event={"ID":"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb","Type":"ContainerStarted","Data":"eff9e0ffea9b94840b378a2b0445d54bfaa280dd7938308dc770d5333b1f4bde"} Feb 02 10:01:01 crc kubenswrapper[4720]: I0202 10:01:01.956815 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500441-mkvqn" event={"ID":"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb","Type":"ContainerStarted","Data":"ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba"} Feb 02 10:01:01 crc kubenswrapper[4720]: I0202 10:01:01.984956 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500441-mkvqn" podStartSLOduration=1.984931894 podStartE2EDuration="1.984931894s" podCreationTimestamp="2026-02-02 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:01:01.972038824 +0000 UTC m=+3895.827664380" watchObservedRunningTime="2026-02-02 10:01:01.984931894 +0000 UTC m=+3895.840557480" Feb 02 10:01:04 crc kubenswrapper[4720]: I0202 10:01:04.994925 4720 generic.go:334] "Generic (PLEG): container finished" podID="dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" containerID="eff9e0ffea9b94840b378a2b0445d54bfaa280dd7938308dc770d5333b1f4bde" exitCode=0 Feb 02 10:01:04 crc kubenswrapper[4720]: I0202 10:01:04.995001 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500441-mkvqn" event={"ID":"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb","Type":"ContainerDied","Data":"eff9e0ffea9b94840b378a2b0445d54bfaa280dd7938308dc770d5333b1f4bde"} Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.586982 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.743862 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbb54\" (UniqueName: \"kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54\") pod \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.743993 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys\") pod \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.744193 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data\") pod \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.744228 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle\") pod \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\" (UID: \"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb\") " Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.753156 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" (UID: "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.755973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54" (OuterVolumeSpecName: "kube-api-access-gbb54") pod "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" (UID: "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb"). InnerVolumeSpecName "kube-api-access-gbb54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.799428 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" (UID: "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.818536 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data" (OuterVolumeSpecName: "config-data") pod "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" (UID: "dff71476-fbc6-40f0-9bbf-f165ad0d6ccb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.846292 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.846324 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.846336 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbb54\" (UniqueName: \"kubernetes.io/projected/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-kube-api-access-gbb54\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.846346 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dff71476-fbc6-40f0-9bbf-f165ad0d6ccb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.863937 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:01:06 crc kubenswrapper[4720]: I0202 10:01:06.924848 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:01:07 crc kubenswrapper[4720]: I0202 10:01:07.013276 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500441-mkvqn" Feb 02 10:01:07 crc kubenswrapper[4720]: I0202 10:01:07.013269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500441-mkvqn" event={"ID":"dff71476-fbc6-40f0-9bbf-f165ad0d6ccb","Type":"ContainerDied","Data":"ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba"} Feb 02 10:01:07 crc kubenswrapper[4720]: I0202 10:01:07.013332 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba" Feb 02 10:01:07 crc kubenswrapper[4720]: I0202 10:01:07.105570 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:01:07 crc kubenswrapper[4720]: I0202 10:01:07.886980 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:01:07 crc kubenswrapper[4720]: E0202 10:01:07.888670 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.020595 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8p2w6" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="registry-server" containerID="cri-o://a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5" gracePeriod=2 Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.770905 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.889558 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mcz\" (UniqueName: \"kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz\") pod \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.889612 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content\") pod \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.889668 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities\") pod \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\" (UID: \"9aacf2e0-e990-4804-b1d3-7639990fa5f5\") " Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.890925 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities" (OuterVolumeSpecName: "utilities") pod "9aacf2e0-e990-4804-b1d3-7639990fa5f5" (UID: "9aacf2e0-e990-4804-b1d3-7639990fa5f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.945716 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz" (OuterVolumeSpecName: "kube-api-access-k2mcz") pod "9aacf2e0-e990-4804-b1d3-7639990fa5f5" (UID: "9aacf2e0-e990-4804-b1d3-7639990fa5f5"). InnerVolumeSpecName "kube-api-access-k2mcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.992134 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mcz\" (UniqueName: \"kubernetes.io/projected/9aacf2e0-e990-4804-b1d3-7639990fa5f5-kube-api-access-k2mcz\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:08 crc kubenswrapper[4720]: I0202 10:01:08.992168 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.031366 4720 generic.go:334] "Generic (PLEG): container finished" podID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerID="a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5" exitCode=0 Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.031476 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerDied","Data":"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5"} Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.031555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8p2w6" event={"ID":"9aacf2e0-e990-4804-b1d3-7639990fa5f5","Type":"ContainerDied","Data":"3a7eb9d35954f377aee1b0523d6eb34a1377160d9a54c20db2b7fdab132e75cc"} Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.031594 4720 scope.go:117] "RemoveContainer" containerID="a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.032120 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8p2w6" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.033562 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aacf2e0-e990-4804-b1d3-7639990fa5f5" (UID: "9aacf2e0-e990-4804-b1d3-7639990fa5f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.064728 4720 scope.go:117] "RemoveContainer" containerID="48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.097270 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aacf2e0-e990-4804-b1d3-7639990fa5f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.141128 4720 scope.go:117] "RemoveContainer" containerID="6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.163385 4720 scope.go:117] "RemoveContainer" containerID="a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5" Feb 02 10:01:09 crc kubenswrapper[4720]: E0202 10:01:09.163872 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5\": container with ID starting with a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5 not found: ID does not exist" containerID="a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.163931 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5"} err="failed to get container status \"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5\": rpc error: code = NotFound desc = could not find container \"a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5\": container with ID starting with a3d0c548ec48c47a714a1a2cec44648356ca501f916e3486173e78d57e2f3fd5 not found: ID does not exist" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.163959 4720 scope.go:117] "RemoveContainer" containerID="48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283" Feb 02 10:01:09 crc kubenswrapper[4720]: E0202 10:01:09.164345 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283\": container with ID starting with 48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283 not found: ID does not exist" containerID="48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.164380 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283"} err="failed to get container status \"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283\": rpc error: code = NotFound desc = could not find container \"48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283\": container with ID starting with 48816e6235014c76354728972f1ee00fd24d6a0e985a1740607cb04597e08283 not found: ID does not exist" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.164404 4720 scope.go:117] "RemoveContainer" containerID="6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c" Feb 02 10:01:09 crc kubenswrapper[4720]: E0202 10:01:09.164665 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c\": container with ID starting with 6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c not found: ID does not exist" containerID="6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.164707 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c"} err="failed to get container status \"6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c\": rpc error: code = NotFound desc = could not find container \"6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c\": container with ID starting with 6d92b340989eb6d23de5f1b3bea13502fb3f85988f2ebe760d91bca446c16f1c not found: ID does not exist" Feb 02 10:01:09 crc kubenswrapper[4720]: E0202 10:01:09.292848 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.379714 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:01:09 crc kubenswrapper[4720]: I0202 10:01:09.389617 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8p2w6"] Feb 02 10:01:10 crc kubenswrapper[4720]: I0202 10:01:10.897718 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" path="/var/lib/kubelet/pods/9aacf2e0-e990-4804-b1d3-7639990fa5f5/volumes" Feb 02 10:01:18 crc kubenswrapper[4720]: I0202 10:01:18.887934 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:01:18 crc kubenswrapper[4720]: E0202 10:01:18.888805 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:01:19 crc kubenswrapper[4720]: E0202 10:01:19.535051 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:01:29 crc kubenswrapper[4720]: E0202 10:01:29.771568 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache]" Feb 02 10:01:29 crc kubenswrapper[4720]: I0202 10:01:29.886676 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:01:29 crc kubenswrapper[4720]: E0202 10:01:29.887055 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:01:40 crc kubenswrapper[4720]: E0202 10:01:40.032503 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:01:42 crc kubenswrapper[4720]: I0202 10:01:42.886837 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:01:42 crc kubenswrapper[4720]: E0202 10:01:42.887735 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:01:50 crc kubenswrapper[4720]: E0202 10:01:50.303515 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:01:56 crc kubenswrapper[4720]: I0202 10:01:56.896837 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:01:56 crc kubenswrapper[4720]: E0202 10:01:56.897805 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" 
podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:02:00 crc kubenswrapper[4720]: E0202 10:02:00.584584 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice/crio-ff7eb74a9ad4ce7e33e47d3de645c328fbbcf0081172159c3ad884aeabd406ba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddff71476_fbc6_40f0_9bbf_f165ad0d6ccb.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:02:11 crc kubenswrapper[4720]: I0202 10:02:11.887442 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:02:11 crc kubenswrapper[4720]: E0202 10:02:11.888280 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:02:25 crc kubenswrapper[4720]: I0202 10:02:25.887950 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:02:25 crc kubenswrapper[4720]: E0202 10:02:25.888691 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:02:39 crc kubenswrapper[4720]: I0202 10:02:39.889823 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:02:39 crc kubenswrapper[4720]: E0202 10:02:39.890561 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:02:50 crc kubenswrapper[4720]: I0202 10:02:50.888004 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:02:50 crc kubenswrapper[4720]: E0202 10:02:50.888954 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.271458 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:02:53 crc kubenswrapper[4720]: E0202 10:02:53.272344 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" containerName="keystone-cron" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272356 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" containerName="keystone-cron" Feb 02 10:02:53 crc kubenswrapper[4720]: E0202 10:02:53.272371 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="extract-content" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272393 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="extract-content" Feb 02 10:02:53 crc kubenswrapper[4720]: E0202 10:02:53.272404 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="extract-utilities" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272410 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="extract-utilities" Feb 02 10:02:53 crc kubenswrapper[4720]: E0202 10:02:53.272430 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="registry-server" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272436 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="registry-server" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272622 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff71476-fbc6-40f0-9bbf-f165ad0d6ccb" containerName="keystone-cron" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.272637 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aacf2e0-e990-4804-b1d3-7639990fa5f5" containerName="registry-server" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.273968 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.292049 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.434335 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.434576 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.434687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfmg\" (UniqueName: \"kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.536612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.536724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfmg\" (UniqueName: \"kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.536847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.537184 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.537295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.557079 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cjfmg\" (UniqueName: \"kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg\") pod \"community-operators-p57jk\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:53 crc kubenswrapper[4720]: I0202 10:02:53.665303 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:02:54 crc kubenswrapper[4720]: I0202 10:02:54.123321 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:02:55 crc kubenswrapper[4720]: I0202 10:02:55.104250 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerID="ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af" exitCode=0 Feb 02 10:02:55 crc kubenswrapper[4720]: I0202 10:02:55.104516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerDied","Data":"ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af"} Feb 02 10:02:55 crc kubenswrapper[4720]: I0202 10:02:55.105769 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerStarted","Data":"c4f64059543ee285768ac5d4af47a4aeefd3de27c4228b77436859f736211f05"} Feb 02 10:02:55 crc kubenswrapper[4720]: I0202 10:02:55.107644 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:02:56 crc kubenswrapper[4720]: I0202 10:02:56.116829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerStarted","Data":"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce"} Feb 02 10:02:58 crc kubenswrapper[4720]: I0202 10:02:58.178092 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerID="08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce" exitCode=0 Feb 02 10:02:58 crc kubenswrapper[4720]: I0202 10:02:58.178109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerDied","Data":"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce"} Feb 02 10:02:59 crc kubenswrapper[4720]: I0202 10:02:59.192078 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerStarted","Data":"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412"} Feb 02 10:02:59 crc kubenswrapper[4720]: I0202 10:02:59.223600 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p57jk" podStartSLOduration=2.735076553 podStartE2EDuration="6.223578701s" podCreationTimestamp="2026-02-02 10:02:53 +0000 UTC" firstStartedPulling="2026-02-02 10:02:55.107415831 +0000 UTC m=+4008.963041387" lastFinishedPulling="2026-02-02 10:02:58.595917979 +0000 UTC m=+4012.451543535" observedRunningTime="2026-02-02 10:02:59.210378261 +0000 UTC m=+4013.066003817" watchObservedRunningTime="2026-02-02 
10:02:59.223578701 +0000 UTC m=+4013.079204257" Feb 02 10:03:03 crc kubenswrapper[4720]: I0202 10:03:03.666267 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:03 crc kubenswrapper[4720]: I0202 10:03:03.668114 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:04 crc kubenswrapper[4720]: I0202 10:03:04.315033 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:04 crc kubenswrapper[4720]: I0202 10:03:04.378834 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:04 crc kubenswrapper[4720]: I0202 10:03:04.561621 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:03:05 crc kubenswrapper[4720]: I0202 10:03:05.886780 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:03:05 crc kubenswrapper[4720]: E0202 10:03:05.887237 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:03:06 crc kubenswrapper[4720]: I0202 10:03:06.265719 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p57jk" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="registry-server" containerID="cri-o://4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412" gracePeriod=2 Feb 02 10:03:06 crc kubenswrapper[4720]: I0202 10:03:06.957571 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.070716 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content\") pod \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.070813 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities\") pod \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.070985 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfmg\" (UniqueName: \"kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg\") pod \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\" (UID: \"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c\") " Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.074168 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities" (OuterVolumeSpecName: "utilities") pod "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" (UID: "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.088113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg" (OuterVolumeSpecName: "kube-api-access-cjfmg") pod "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" (UID: "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c"). InnerVolumeSpecName "kube-api-access-cjfmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.174335 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfmg\" (UniqueName: \"kubernetes.io/projected/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-kube-api-access-cjfmg\") on node \"crc\" DevicePath \"\"" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.174740 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.281931 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerID="4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412" exitCode=0 Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.281975 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerDied","Data":"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412"} Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.282032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p57jk" event={"ID":"3f7ac779-0c70-4b81-bd32-f5933d4c4d9c","Type":"ContainerDied","Data":"c4f64059543ee285768ac5d4af47a4aeefd3de27c4228b77436859f736211f05"} Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.282029 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p57jk" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.282073 4720 scope.go:117] "RemoveContainer" containerID="4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.303517 4720 scope.go:117] "RemoveContainer" containerID="08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.325160 4720 scope.go:117] "RemoveContainer" containerID="ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.357645 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" (UID: "3f7ac779-0c70-4b81-bd32-f5933d4c4d9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.379245 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.381709 4720 scope.go:117] "RemoveContainer" containerID="4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412" Feb 02 10:03:07 crc kubenswrapper[4720]: E0202 10:03:07.382180 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412\": container with ID starting with 4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412 not found: ID does not exist" containerID="4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.382224 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412"} err="failed to get container status \"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412\": rpc error: code = NotFound desc = could not find container \"4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412\": container with ID starting with 4ba219be1d2c4955457af2e0f986817ad53a85c8d934c4f83c090af30283e412 not found: ID does not exist" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.382254 4720 scope.go:117] "RemoveContainer" containerID="08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce" Feb 02 10:03:07 crc kubenswrapper[4720]: E0202 10:03:07.382584 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce\": container with ID starting with 08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce not found: ID does not exist" containerID="08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.382618 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce"} err="failed to get container status \"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce\": rpc error: code = NotFound desc = could not find container \"08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce\": container with ID starting with 08745e99c78c823be8c0b17d06c8ead3fc701a864d25675a861e39a1c1cb0bce not found: ID does not exist" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.382638 4720 scope.go:117] "RemoveContainer" containerID="ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af" Feb 02 10:03:07 crc kubenswrapper[4720]: E0202 10:03:07.382987 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af\": container with ID starting with ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af not found: ID does not exist" containerID="ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.383114 4720 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af"} err="failed to get container status \"ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af\": rpc error: code = NotFound desc = could not find container \"ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af\": container with ID starting with ad4542d3dfcb5e3b45fac644cbdcc5f3ac24ec95ab415f1cf11da6139fca26af not found: ID does not exist" Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.618158 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:03:07 crc kubenswrapper[4720]: I0202 10:03:07.627342 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p57jk"] Feb 02 10:03:08 crc kubenswrapper[4720]: I0202 10:03:08.901468 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" path="/var/lib/kubelet/pods/3f7ac779-0c70-4b81-bd32-f5933d4c4d9c/volumes" Feb 02 10:03:18 crc kubenswrapper[4720]: I0202 10:03:18.887321 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:03:18 crc kubenswrapper[4720]: E0202 10:03:18.888432 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:03:33 crc kubenswrapper[4720]: I0202 10:03:33.888080 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:03:33 crc kubenswrapper[4720]: E0202 10:03:33.889302 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:03:44 crc kubenswrapper[4720]: I0202 10:03:44.887386 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:03:44 crc kubenswrapper[4720]: E0202 10:03:44.888366 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:03:58 crc kubenswrapper[4720]: I0202 10:03:58.887336 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:03:58 crc kubenswrapper[4720]: E0202 10:03:58.888245 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:04:10 crc kubenswrapper[4720]: I0202 10:04:10.887252 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:04:10 crc kubenswrapper[4720]: E0202 10:04:10.888004 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:04:23 crc kubenswrapper[4720]: I0202 10:04:23.888152 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:04:23 crc kubenswrapper[4720]: E0202 10:04:23.889173 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:04:34 crc kubenswrapper[4720]: I0202 10:04:34.887481 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:04:34 crc kubenswrapper[4720]: E0202 10:04:34.888227 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:04:49 crc kubenswrapper[4720]: I0202 10:04:49.887138 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:04:49 crc kubenswrapper[4720]: E0202 10:04:49.888087 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:05:02 crc kubenswrapper[4720]: I0202 10:05:02.887362 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:05:02 crc kubenswrapper[4720]: E0202 10:05:02.888156 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:05:16 crc kubenswrapper[4720]: I0202 10:05:16.894319 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:05:16 crc kubenswrapper[4720]: E0202 10:05:16.895384 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:05:28 crc kubenswrapper[4720]: I0202 10:05:28.887973 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:05:29 crc kubenswrapper[4720]: I0202 10:05:29.927497 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc"} Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.498509 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"] Feb 02 10:07:19 crc kubenswrapper[4720]: E0202 10:07:19.500667 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="extract-utilities" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.501041 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="extract-utilities" Feb 02 10:07:19 crc kubenswrapper[4720]: E0202 10:07:19.501137 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="registry-server" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.501215 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="registry-server" Feb 02 10:07:19 crc kubenswrapper[4720]: E0202 10:07:19.501320 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="extract-content" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.501397 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="extract-content" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.501677 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7ac779-0c70-4b81-bd32-f5933d4c4d9c" containerName="registry-server" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.504062 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.507561 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"] Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.574857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656zv\" (UniqueName: \"kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.574917 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.574975 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.677173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-656zv\" (UniqueName: \"kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.677227 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.677252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.678035 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.678092 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.703635 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-656zv\" (UniqueName: \"kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv\") pod \"certified-operators-5jt6w\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:19 crc kubenswrapper[4720]: I0202 10:07:19.843103 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:20 crc kubenswrapper[4720]: I0202 10:07:20.365459 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"] Feb 02 10:07:21 crc kubenswrapper[4720]: I0202 10:07:21.032392 4720 generic.go:334] "Generic (PLEG): container finished" podID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerID="c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd" exitCode=0 Feb 02 10:07:21 crc kubenswrapper[4720]: I0202 10:07:21.032565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerDied","Data":"c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd"} Feb 02 10:07:21 crc kubenswrapper[4720]: I0202 10:07:21.032638 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerStarted","Data":"e4ef2c945ba0c16b0c09e10493440c5d996d0197bda429f1623ad6319539bc3b"} Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.041906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerStarted","Data":"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698"} Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.704079 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"] Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.707633 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.713327 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"]
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.841220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gw4\" (UniqueName: \"kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.841356 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.841419 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.943763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9gw4\" (UniqueName: \"kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.943866 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.943956 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.944554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:22 crc kubenswrapper[4720]: I0202 10:07:22.944606 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:23 crc kubenswrapper[4720]: I0202 10:07:23.169776 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9gw4\" (UniqueName: \"kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4\") pod \"redhat-marketplace-tkk5q\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:23 crc kubenswrapper[4720]: I0202 10:07:23.379861 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:23 crc kubenswrapper[4720]: I0202 10:07:23.892526 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"]
Feb 02 10:07:23 crc kubenswrapper[4720]: W0202 10:07:23.895427 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa00d8bf_aa6c_46cd_a311_134e76b4c5f6.slice/crio-8620f92d5f0ffc9fcd558fa2e0740a9fe60ca5c42d30926b71c41e994348ddee WatchSource:0}: Error finding container 8620f92d5f0ffc9fcd558fa2e0740a9fe60ca5c42d30926b71c41e994348ddee: Status 404 returned error can't find the container with id 8620f92d5f0ffc9fcd558fa2e0740a9fe60ca5c42d30926b71c41e994348ddee
Feb 02 10:07:24 crc kubenswrapper[4720]: I0202 10:07:24.060676 4720 generic.go:334] "Generic (PLEG): container finished" podID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerID="6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698" exitCode=0
Feb 02 10:07:24 crc kubenswrapper[4720]: I0202 10:07:24.060740 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerDied","Data":"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698"}
Feb 02 10:07:24 crc kubenswrapper[4720]: I0202 10:07:24.066311 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerStarted","Data":"8620f92d5f0ffc9fcd558fa2e0740a9fe60ca5c42d30926b71c41e994348ddee"}
Feb 02 10:07:25 crc kubenswrapper[4720]: I0202 10:07:25.076088 4720 generic.go:334] "Generic (PLEG): container finished" podID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerID="168e75b17aeb2f32ef9b5f80ef47d145d089cf4091dbe3bd7c8029d5d15a7631" exitCode=0
Feb 02 10:07:25 crc kubenswrapper[4720]: I0202 10:07:25.076184 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerDied","Data":"168e75b17aeb2f32ef9b5f80ef47d145d089cf4091dbe3bd7c8029d5d15a7631"}
Feb 02 10:07:25 crc kubenswrapper[4720]: I0202 10:07:25.079924 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerStarted","Data":"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a"}
Feb 02 10:07:25 crc kubenswrapper[4720]: I0202 10:07:25.121418 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jt6w" podStartSLOduration=2.682785532 podStartE2EDuration="6.12139802s" podCreationTimestamp="2026-02-02 10:07:19 +0000 UTC" firstStartedPulling="2026-02-02 10:07:21.034750336 +0000 UTC m=+4274.890375892" lastFinishedPulling="2026-02-02 10:07:24.473362824 +0000 UTC m=+4278.328988380" observedRunningTime="2026-02-02 10:07:25.115283742 +0000 UTC m=+4278.970909308" watchObservedRunningTime="2026-02-02 10:07:25.12139802 +0000 UTC m=+4278.977023576"
Feb 02 10:07:26 crc kubenswrapper[4720]: I0202 10:07:26.090536 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerStarted","Data":"911317b14fa7361fe005719e6077c536f2e579e52c1aa0545167e3d70381f82b"}
Feb 02 10:07:27 crc kubenswrapper[4720]: I0202 10:07:27.106597 4720 generic.go:334] "Generic (PLEG): container finished" podID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerID="911317b14fa7361fe005719e6077c536f2e579e52c1aa0545167e3d70381f82b" exitCode=0
Feb 02 10:07:27 crc kubenswrapper[4720]: I0202 10:07:27.106818 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerDied","Data":"911317b14fa7361fe005719e6077c536f2e579e52c1aa0545167e3d70381f82b"}
Feb 02 10:07:28 crc kubenswrapper[4720]: I0202 10:07:28.133688 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerStarted","Data":"25013c1c5d6ef55d492831fda11405e86df2a0d3a2117dfb7b8c9655dc73670d"}
Feb 02 10:07:28 crc kubenswrapper[4720]: I0202 10:07:28.160160 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tkk5q" podStartSLOduration=3.515560877 podStartE2EDuration="6.160141064s" podCreationTimestamp="2026-02-02 10:07:22 +0000 UTC" firstStartedPulling="2026-02-02 10:07:25.078180342 +0000 UTC m=+4278.933805898" lastFinishedPulling="2026-02-02 10:07:27.722760489 +0000 UTC m=+4281.578386085" observedRunningTime="2026-02-02 10:07:28.150808387 +0000 UTC m=+4282.006433943" watchObservedRunningTime="2026-02-02 10:07:28.160141064 +0000 UTC m=+4282.015766610"
Feb 02 10:07:29 crc kubenswrapper[4720]: I0202 10:07:29.844022 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jt6w"
Feb 02 10:07:29 crc kubenswrapper[4720]: I0202 10:07:29.844645 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jt6w"
Feb 02 10:07:29 crc kubenswrapper[4720]: I0202 10:07:29.912544 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jt6w"
Feb 02 10:07:30 crc kubenswrapper[4720]: I0202 10:07:30.204815 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jt6w"
Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.087361 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"]
Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.088072 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jt6w" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="registry-server" containerID="cri-o://f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a" gracePeriod=2
Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.380620 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tkk5q"
Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.381032 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tkk5q"
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tkk5q" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.443302 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tkk5q" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.669052 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.782445 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content\") pod \"391b64ad-5271-40ed-9ef1-43358da39cc3\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.782496 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities\") pod \"391b64ad-5271-40ed-9ef1-43358da39cc3\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.782710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-656zv\" (UniqueName: \"kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv\") pod \"391b64ad-5271-40ed-9ef1-43358da39cc3\" (UID: \"391b64ad-5271-40ed-9ef1-43358da39cc3\") " Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.783481 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities" (OuterVolumeSpecName: "utilities") pod "391b64ad-5271-40ed-9ef1-43358da39cc3" (UID: "391b64ad-5271-40ed-9ef1-43358da39cc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.832599 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "391b64ad-5271-40ed-9ef1-43358da39cc3" (UID: "391b64ad-5271-40ed-9ef1-43358da39cc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.885687 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:33 crc kubenswrapper[4720]: I0202 10:07:33.885723 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/391b64ad-5271-40ed-9ef1-43358da39cc3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.169715 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv" (OuterVolumeSpecName: "kube-api-access-656zv") pod "391b64ad-5271-40ed-9ef1-43358da39cc3" (UID: "391b64ad-5271-40ed-9ef1-43358da39cc3"). InnerVolumeSpecName "kube-api-access-656zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.190736 4720 generic.go:334] "Generic (PLEG): container finished" podID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerID="f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a" exitCode=0 Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.190812 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerDied","Data":"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a"} Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.190854 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jt6w" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.190947 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jt6w" event={"ID":"391b64ad-5271-40ed-9ef1-43358da39cc3","Type":"ContainerDied","Data":"e4ef2c945ba0c16b0c09e10493440c5d996d0197bda429f1623ad6319539bc3b"} Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.190977 4720 scope.go:117] "RemoveContainer" containerID="f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.193079 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-656zv\" (UniqueName: \"kubernetes.io/projected/391b64ad-5271-40ed-9ef1-43358da39cc3-kube-api-access-656zv\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.231226 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"] Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.236081 4720 scope.go:117] "RemoveContainer" containerID="6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.241581 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jt6w"] Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.258954 4720 scope.go:117] "RemoveContainer" containerID="c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.275005 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tkk5q" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.316984 4720 scope.go:117] "RemoveContainer" containerID="f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a" Feb 02 10:07:34 crc kubenswrapper[4720]: E0202 10:07:34.317431 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a\": container with ID starting with f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a not found: ID does not exist" containerID="f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.317463 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a"} err="failed to get container status \"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a\": rpc error: code = NotFound desc = could not find container 
\"f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a\": container with ID starting with f63b645ba85016ecf3942f45328f4e29a93e81e96b74e6b081fd0faf537d968a not found: ID does not exist" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.317485 4720 scope.go:117] "RemoveContainer" containerID="6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698" Feb 02 10:07:34 crc kubenswrapper[4720]: E0202 10:07:34.317806 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698\": container with ID starting with 6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698 not found: ID does not exist" containerID="6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.317845 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698"} err="failed to get container status \"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698\": rpc error: code = NotFound desc = could not find container \"6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698\": container with ID starting with 6442d27e1122bb19e56ca44d1da7a77ea407c019c37a49405c421d56230aa698 not found: ID does not exist" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.317866 4720 scope.go:117] "RemoveContainer" containerID="c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd" Feb 02 10:07:34 crc kubenswrapper[4720]: E0202 10:07:34.318488 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd\": container with ID starting with c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd not found: ID does not exist" containerID="c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.318516 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd"} err="failed to get container status \"c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd\": rpc error: code = NotFound desc = could not find container \"c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd\": container with ID starting with c274b4191cdb0c29cf845ff1cff408414503fedd86b9f0a5d9986afd9bc3c8fd not found: ID does not exist" Feb 02 10:07:34 crc kubenswrapper[4720]: I0202 10:07:34.902968 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" path="/var/lib/kubelet/pods/391b64ad-5271-40ed-9ef1-43358da39cc3/volumes" Feb 02 10:07:36 crc kubenswrapper[4720]: I0202 10:07:36.906249 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"] Feb 02 10:07:36 crc kubenswrapper[4720]: I0202 10:07:36.906952 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tkk5q" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="registry-server" containerID="cri-o://25013c1c5d6ef55d492831fda11405e86df2a0d3a2117dfb7b8c9655dc73670d" gracePeriod=2 Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.221228 4720 generic.go:334] 
"Generic (PLEG): container finished" podID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerID="25013c1c5d6ef55d492831fda11405e86df2a0d3a2117dfb7b8c9655dc73670d" exitCode=0 Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.221860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerDied","Data":"25013c1c5d6ef55d492831fda11405e86df2a0d3a2117dfb7b8c9655dc73670d"} Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.506913 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkk5q" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.675398 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content\") pod \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.675483 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities\") pod \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.675559 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9gw4\" (UniqueName: \"kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4\") pod \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\" (UID: \"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6\") " Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.677370 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities" (OuterVolumeSpecName: "utilities") pod "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" (UID: "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.688640 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4" (OuterVolumeSpecName: "kube-api-access-j9gw4") pod "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" (UID: "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6"). InnerVolumeSpecName "kube-api-access-j9gw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.716443 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" (UID: "fa00d8bf-aa6c-46cd-a311-134e76b4c5f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.779433 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.779471 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:37 crc kubenswrapper[4720]: I0202 10:07:37.779484 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9gw4\" (UniqueName: \"kubernetes.io/projected/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6-kube-api-access-j9gw4\") on node \"crc\" DevicePath \"\"" Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.232972 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tkk5q" event={"ID":"fa00d8bf-aa6c-46cd-a311-134e76b4c5f6","Type":"ContainerDied","Data":"8620f92d5f0ffc9fcd558fa2e0740a9fe60ca5c42d30926b71c41e994348ddee"} Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.233022 4720 scope.go:117] "RemoveContainer" containerID="25013c1c5d6ef55d492831fda11405e86df2a0d3a2117dfb7b8c9655dc73670d" Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.233190 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tkk5q" Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.271917 4720 scope.go:117] "RemoveContainer" containerID="911317b14fa7361fe005719e6077c536f2e579e52c1aa0545167e3d70381f82b" Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.278468 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"] Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.289164 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tkk5q"] Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.289870 4720 scope.go:117] "RemoveContainer" containerID="168e75b17aeb2f32ef9b5f80ef47d145d089cf4091dbe3bd7c8029d5d15a7631" Feb 02 10:07:38 crc kubenswrapper[4720]: I0202 10:07:38.902559 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" path="/var/lib/kubelet/pods/fa00d8bf-aa6c-46cd-a311-134e76b4c5f6/volumes" Feb 02 10:07:47 crc kubenswrapper[4720]: I0202 10:07:47.902109 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:07:47 crc kubenswrapper[4720]: I0202 10:07:47.902817 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:08:17 crc kubenswrapper[4720]: I0202 10:08:17.902380 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:08:17 crc kubenswrapper[4720]: I0202 10:08:17.903008 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:08:47 crc kubenswrapper[4720]: I0202 10:08:47.902264 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:08:47 crc kubenswrapper[4720]: I0202 10:08:47.902823 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:08:47 crc kubenswrapper[4720]: I0202 10:08:47.902869 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 10:08:47 crc kubenswrapper[4720]: I0202 10:08:47.903783 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:08:47 crc kubenswrapper[4720]: I0202 10:08:47.903869 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc" gracePeriod=600 Feb 02 10:08:48 crc kubenswrapper[4720]: I0202 10:08:48.882669 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc" exitCode=0 Feb 02 10:08:48 crc kubenswrapper[4720]: I0202 10:08:48.882931 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc"} Feb 02 10:08:48 crc kubenswrapper[4720]: I0202 10:08:48.883124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556"} Feb 02 10:08:48 crc kubenswrapper[4720]: I0202 10:08:48.883142 4720 scope.go:117] "RemoveContainer" containerID="7a943aa7a5492e9b59a92fffe97353312e34c8346a43bf42cce0967d39409e59" Feb 02 10:11:12 crc kubenswrapper[4720]: I0202 10:11:12.351185 4720 generic.go:334] "Generic (PLEG): container finished" podID="daa51d24-e496-4a32-88c3-89ef00451e74" 
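[editor's note] The recurring machine-config-daemon liveness failures above are plain HTTP GETs against the health endpoint named in the log, answered with "connection refused". A minimal sketch reproducing the same check from the node follows; the endpoint URL is taken from the log, while the 1-second timeout is an assumption, not the probe's configured value.

// probe_health.go: repeat the liveness check the kubelet is running above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// "connect: connection refused" here corresponds to the
		// probeResult="failure" records in the log.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}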
containerID="b0e87b0b9bb04bc128260ef6b26cf1ce671d7dde0622aff2b7082802ddaabac3" exitCode=0 Feb 02 10:11:12 crc kubenswrapper[4720]: I0202 10:11:12.351305 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"daa51d24-e496-4a32-88c3-89ef00451e74","Type":"ContainerDied","Data":"b0e87b0b9bb04bc128260ef6b26cf1ce671d7dde0622aff2b7082802ddaabac3"} Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.792425 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.963821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.963958 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964029 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964103 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964132 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964236 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964274 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qss8k\" (UniqueName: \"kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964294 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.964389 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs\") pod \"daa51d24-e496-4a32-88c3-89ef00451e74\" (UID: \"daa51d24-e496-4a32-88c3-89ef00451e74\") " Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.966987 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.968637 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data" (OuterVolumeSpecName: "config-data") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.970968 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.972407 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k" (OuterVolumeSpecName: "kube-api-access-qss8k") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "kube-api-access-qss8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:11:13 crc kubenswrapper[4720]: I0202 10:11:13.973824 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.011654 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.012105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.022317 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067100 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067130 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067141 4720 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067152 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qss8k\" (UniqueName: \"kubernetes.io/projected/daa51d24-e496-4a32-88c3-89ef00451e74-kube-api-access-qss8k\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067161 4720 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/daa51d24-e496-4a32-88c3-89ef00451e74-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067169 4720 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/daa51d24-e496-4a32-88c3-89ef00451e74-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067664 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.067681 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.086472 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.131901 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "daa51d24-e496-4a32-88c3-89ef00451e74" (UID: "daa51d24-e496-4a32-88c3-89ef00451e74"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.171116 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.171151 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/daa51d24-e496-4a32-88c3-89ef00451e74-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.377783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"daa51d24-e496-4a32-88c3-89ef00451e74","Type":"ContainerDied","Data":"fa86e6dc66697660768d9ffebf013d2d501ca7564e9d4a159d0eadb43f40fbd0"} Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.377826 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa86e6dc66697660768d9ffebf013d2d501ca7564e9d4a159d0eadb43f40fbd0" Feb 02 10:11:14 crc kubenswrapper[4720]: I0202 10:11:14.377916 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.052496 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053480 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="extract-content" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053501 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="extract-content" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053526 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="extract-content" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053536 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="extract-content" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053561 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="extract-utilities" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053569 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="extract-utilities" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053578 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053585 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053597 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="extract-utilities" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053604 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="extract-utilities" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053632 4720 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="daa51d24-e496-4a32-88c3-89ef00451e74" containerName="tempest-tests-tempest-tests-runner" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053639 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa51d24-e496-4a32-88c3-89ef00451e74" containerName="tempest-tests-tempest-tests-runner" Feb 02 10:11:16 crc kubenswrapper[4720]: E0202 10:11:16.053652 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053660 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053870 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="391b64ad-5271-40ed-9ef1-43358da39cc3" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053927 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa00d8bf-aa6c-46cd-a311-134e76b4c5f6" containerName="registry-server" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.053959 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa51d24-e496-4a32-88c3-89ef00451e74" containerName="tempest-tests-tempest-tests-runner" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.054837 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.068486 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.225394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgp6d\" (UniqueName: \"kubernetes.io/projected/b4b6f688-8a45-4cc3-8677-c83761f14947-kube-api-access-tgp6d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.225714 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.328556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.328845 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgp6d\" (UniqueName: \"kubernetes.io/projected/b4b6f688-8a45-4cc3-8677-c83761f14947-kube-api-access-tgp6d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc 
kubenswrapper[4720]: I0202 10:11:16.329393 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.358629 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgp6d\" (UniqueName: \"kubernetes.io/projected/b4b6f688-8a45-4cc3-8677-c83761f14947-kube-api-access-tgp6d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.371579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b4b6f688-8a45-4cc3-8677-c83761f14947\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.387718 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.840030 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 10:11:16 crc kubenswrapper[4720]: I0202 10:11:16.846906 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:11:17 crc kubenswrapper[4720]: I0202 10:11:17.409854 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b4b6f688-8a45-4cc3-8677-c83761f14947","Type":"ContainerStarted","Data":"6cbed057956649b07ec07f36299bcbed104667fb6f4efce1a72ddd80fb1dc0d4"} Feb 02 10:11:17 crc kubenswrapper[4720]: I0202 10:11:17.902324 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:11:17 crc kubenswrapper[4720]: I0202 10:11:17.902434 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:11:18 crc kubenswrapper[4720]: I0202 10:11:18.423765 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b4b6f688-8a45-4cc3-8677-c83761f14947","Type":"ContainerStarted","Data":"1da2c0cb5f18324e86001e80aae45e6e4bb8d5f4c469378ad13282c86a5a8668"} Feb 02 10:11:18 crc kubenswrapper[4720]: I0202 10:11:18.448488 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.406717176 podStartE2EDuration="2.448467105s" 
podCreationTimestamp="2026-02-02 10:11:16 +0000 UTC" firstStartedPulling="2026-02-02 10:11:16.846544564 +0000 UTC m=+4510.702170130" lastFinishedPulling="2026-02-02 10:11:17.888294463 +0000 UTC m=+4511.743920059" observedRunningTime="2026-02-02 10:11:18.439821304 +0000 UTC m=+4512.295446900" watchObservedRunningTime="2026-02-02 10:11:18.448467105 +0000 UTC m=+4512.304092671" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.876773 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zwx74/must-gather-488gp"] Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.880646 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.883422 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zwx74"/"kube-root-ca.crt" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.883672 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zwx74"/"openshift-service-ca.crt" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.883905 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zwx74"/"default-dockercfg-fxkqk" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.899356 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zwx74/must-gather-488gp"] Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.960583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s67q\" (UniqueName: \"kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:44 crc kubenswrapper[4720]: I0202 10:11:44.960770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.062148 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.062303 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s67q\" (UniqueName: \"kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.062700 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.090747 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s67q\" (UniqueName: \"kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q\") pod \"must-gather-488gp\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.202431 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.704325 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zwx74/must-gather-488gp"] Feb 02 10:11:45 crc kubenswrapper[4720]: I0202 10:11:45.737057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/must-gather-488gp" event={"ID":"c1f67a28-54b6-4a3e-bace-22284dc415da","Type":"ContainerStarted","Data":"8a0a05de44f32b9b1132013bde3d77db738b3bcf1f7b9369f4452b6fce3fce36"} Feb 02 10:11:47 crc kubenswrapper[4720]: I0202 10:11:47.902785 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:11:47 crc kubenswrapper[4720]: I0202 10:11:47.903741 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:11:50 crc kubenswrapper[4720]: I0202 10:11:50.790705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/must-gather-488gp" event={"ID":"c1f67a28-54b6-4a3e-bace-22284dc415da","Type":"ContainerStarted","Data":"ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307"} Feb 02 10:11:50 crc kubenswrapper[4720]: I0202 10:11:50.791092 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/must-gather-488gp" event={"ID":"c1f67a28-54b6-4a3e-bace-22284dc415da","Type":"ContainerStarted","Data":"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d"} Feb 02 10:11:50 crc kubenswrapper[4720]: I0202 10:11:50.823521 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zwx74/must-gather-488gp" podStartSLOduration=2.878351134 podStartE2EDuration="6.823494634s" podCreationTimestamp="2026-02-02 10:11:44 +0000 UTC" firstStartedPulling="2026-02-02 10:11:45.718779471 +0000 UTC m=+4539.574405027" lastFinishedPulling="2026-02-02 10:11:49.663922971 +0000 UTC m=+4543.519548527" observedRunningTime="2026-02-02 10:11:50.814484713 +0000 UTC m=+4544.670110299" watchObservedRunningTime="2026-02-02 10:11:50.823494634 +0000 UTC m=+4544.679120210" Feb 02 10:11:54 crc kubenswrapper[4720]: E0202 10:11:54.196999 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:43282->38.102.83.177:44747: write tcp 38.102.83.177:43282->38.102.83.177:44747: write: connection reset by peer Feb 02 10:11:55 crc kubenswrapper[4720]: I0202 10:11:55.769645 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zwx74/crc-debug-925pb"] Feb 02 10:11:55 crc kubenswrapper[4720]: I0202 10:11:55.771215 4720 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:55 crc kubenswrapper[4720]: I0202 10:11:55.908466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mzm\" (UniqueName: \"kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:55 crc kubenswrapper[4720]: I0202 10:11:55.908567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.009807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.009972 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.010007 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mzm\" (UniqueName: \"kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.047391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mzm\" (UniqueName: \"kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm\") pod \"crc-debug-925pb\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.087519 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:11:56 crc kubenswrapper[4720]: I0202 10:11:56.856544 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-925pb" event={"ID":"6ee3d432-e07b-4dca-867c-7309d890f903","Type":"ContainerStarted","Data":"9a81f35650bfd969e0057494accb8877e762626ab13fa56df16aba34c01941f2"} Feb 02 10:12:07 crc kubenswrapper[4720]: I0202 10:12:07.964217 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-925pb" event={"ID":"6ee3d432-e07b-4dca-867c-7309d890f903","Type":"ContainerStarted","Data":"192f2b7687f86fd49656229b5daf1f6b04d75b72468290db78655f17dabf73e6"} Feb 02 10:12:08 crc kubenswrapper[4720]: I0202 10:12:08.993336 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zwx74/crc-debug-925pb" podStartSLOduration=2.397875922 podStartE2EDuration="13.993318122s" podCreationTimestamp="2026-02-02 10:11:55 +0000 UTC" firstStartedPulling="2026-02-02 10:11:56.119861641 +0000 UTC m=+4549.975487197" lastFinishedPulling="2026-02-02 10:12:07.715303841 +0000 UTC m=+4561.570929397" observedRunningTime="2026-02-02 10:12:08.985786117 +0000 UTC m=+4562.841411673" watchObservedRunningTime="2026-02-02 10:12:08.993318122 +0000 UTC m=+4562.848943678" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.450824 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.453266 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.461161 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.531225 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bg6s\" (UniqueName: \"kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.531479 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.531788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.633997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 
10:12:12.634103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.634157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bg6s\" (UniqueName: \"kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.634453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.634547 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.678311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bg6s\" (UniqueName: \"kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s\") pod \"redhat-operators-hcqwl\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:12 crc kubenswrapper[4720]: I0202 10:12:12.780194 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:17 crc kubenswrapper[4720]: I0202 10:12:17.901749 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:12:17 crc kubenswrapper[4720]: I0202 10:12:17.902385 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:12:17 crc kubenswrapper[4720]: I0202 10:12:17.902436 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 10:12:17 crc kubenswrapper[4720]: I0202 10:12:17.903198 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:12:17 crc kubenswrapper[4720]: I0202 10:12:17.903245 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" gracePeriod=600 Feb 02 10:12:18 crc kubenswrapper[4720]: E0202 10:12:18.054762 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:12:18 crc kubenswrapper[4720]: I0202 10:12:18.059760 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" exitCode=0 Feb 02 10:12:18 crc kubenswrapper[4720]: I0202 10:12:18.059800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556"} Feb 02 10:12:18 crc kubenswrapper[4720]: I0202 10:12:18.059841 4720 scope.go:117] "RemoveContainer" containerID="1602c8e4bd83ab9cf2c86cf5c63541135ed09afe54488ac78d195df2566bdebc" Feb 02 10:12:18 crc kubenswrapper[4720]: I0202 10:12:18.587963 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:19 crc kubenswrapper[4720]: I0202 10:12:19.073298 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:12:19 crc kubenswrapper[4720]: E0202 10:12:19.073948 4720 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:12:19 crc kubenswrapper[4720]: I0202 10:12:19.074420 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerStarted","Data":"8f4d85e504afd23e716f37ebf147a9192049ca9761e3abf27f98c6106bba7ca0"} Feb 02 10:12:20 crc kubenswrapper[4720]: I0202 10:12:20.085661 4720 generic.go:334] "Generic (PLEG): container finished" podID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerID="5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe" exitCode=0 Feb 02 10:12:20 crc kubenswrapper[4720]: I0202 10:12:20.085843 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerDied","Data":"5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe"} Feb 02 10:12:21 crc kubenswrapper[4720]: I0202 10:12:21.099501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerStarted","Data":"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f"} Feb 02 10:12:23 crc kubenswrapper[4720]: I0202 10:12:23.123069 4720 generic.go:334] "Generic (PLEG): container finished" podID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerID="2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f" exitCode=0 Feb 02 10:12:23 crc kubenswrapper[4720]: I0202 10:12:23.123100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerDied","Data":"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f"} Feb 02 10:12:25 crc kubenswrapper[4720]: I0202 10:12:25.142869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerStarted","Data":"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f"} Feb 02 10:12:25 crc kubenswrapper[4720]: I0202 10:12:25.166005 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcqwl" podStartSLOduration=9.727548296 podStartE2EDuration="13.165983908s" podCreationTimestamp="2026-02-02 10:12:12 +0000 UTC" firstStartedPulling="2026-02-02 10:12:20.087610678 +0000 UTC m=+4573.943236234" lastFinishedPulling="2026-02-02 10:12:23.52604629 +0000 UTC m=+4577.381671846" observedRunningTime="2026-02-02 10:12:25.161527599 +0000 UTC m=+4579.017153155" watchObservedRunningTime="2026-02-02 10:12:25.165983908 +0000 UTC m=+4579.021609464" Feb 02 10:12:31 crc kubenswrapper[4720]: I0202 10:12:31.887006 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:12:31 crc kubenswrapper[4720]: E0202 10:12:31.887752 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:12:32 crc kubenswrapper[4720]: I0202 10:12:32.781097 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:32 crc kubenswrapper[4720]: I0202 10:12:32.781457 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:32 crc kubenswrapper[4720]: I0202 10:12:32.845011 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:33 crc kubenswrapper[4720]: I0202 10:12:33.269139 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:33 crc kubenswrapper[4720]: I0202 10:12:33.327753 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.235797 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcqwl" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="registry-server" containerID="cri-o://448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f" gracePeriod=2 Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.830436 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.916219 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bg6s\" (UniqueName: \"kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s\") pod \"51156a08-e6e9-41b0-91d2-053b1f076a83\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.916279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities\") pod \"51156a08-e6e9-41b0-91d2-053b1f076a83\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.916341 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content\") pod \"51156a08-e6e9-41b0-91d2-053b1f076a83\" (UID: \"51156a08-e6e9-41b0-91d2-053b1f076a83\") " Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.917208 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities" (OuterVolumeSpecName: "utilities") pod "51156a08-e6e9-41b0-91d2-053b1f076a83" (UID: "51156a08-e6e9-41b0-91d2-053b1f076a83"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.918257 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:12:35 crc kubenswrapper[4720]: I0202 10:12:35.923180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s" (OuterVolumeSpecName: "kube-api-access-9bg6s") pod "51156a08-e6e9-41b0-91d2-053b1f076a83" (UID: "51156a08-e6e9-41b0-91d2-053b1f076a83"). InnerVolumeSpecName "kube-api-access-9bg6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.020731 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bg6s\" (UniqueName: \"kubernetes.io/projected/51156a08-e6e9-41b0-91d2-053b1f076a83-kube-api-access-9bg6s\") on node \"crc\" DevicePath \"\"" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.038471 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51156a08-e6e9-41b0-91d2-053b1f076a83" (UID: "51156a08-e6e9-41b0-91d2-053b1f076a83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.123088 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51156a08-e6e9-41b0-91d2-053b1f076a83-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.248128 4720 generic.go:334] "Generic (PLEG): container finished" podID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerID="448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f" exitCode=0 Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.248185 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqwl" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.248179 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerDied","Data":"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f"} Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.248392 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqwl" event={"ID":"51156a08-e6e9-41b0-91d2-053b1f076a83","Type":"ContainerDied","Data":"8f4d85e504afd23e716f37ebf147a9192049ca9761e3abf27f98c6106bba7ca0"} Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.248419 4720 scope.go:117] "RemoveContainer" containerID="448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.285268 4720 scope.go:117] "RemoveContainer" containerID="2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.286052 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.295860 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcqwl"] Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.316676 4720 scope.go:117] "RemoveContainer" containerID="5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.350365 4720 scope.go:117] "RemoveContainer" containerID="448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f" Feb 02 10:12:36 crc kubenswrapper[4720]: E0202 10:12:36.350969 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f\": container with ID starting with 448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f not found: ID does not exist" containerID="448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.351011 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f"} err="failed to get container status \"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f\": rpc error: code = NotFound desc = could not find container \"448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f\": container with ID starting with 448b6ba2c2e5c1f3946c120649ac7f5fdc9aea60b38de13bbd6164455c26658f not found: ID does not exist" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.351037 4720 scope.go:117] "RemoveContainer" containerID="2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f" Feb 02 10:12:36 crc kubenswrapper[4720]: E0202 10:12:36.351468 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f\": container with ID starting with 2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f not found: ID does not exist" containerID="2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.351517 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f"} err="failed to get container status \"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f\": rpc error: code = NotFound desc = could not find container \"2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f\": container with ID starting with 2f56a2da74eb6b2db33ef3fd5f4599343ff3e7a2fe28870c3d4822343d59ff6f not found: ID does not exist" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.351545 4720 scope.go:117] "RemoveContainer" containerID="5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe" Feb 02 10:12:36 crc kubenswrapper[4720]: E0202 10:12:36.351817 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe\": container with ID starting with 5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe not found: ID does not exist" containerID="5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.351847 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe"} err="failed to get container status \"5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe\": rpc error: code = NotFound desc = could not find container \"5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe\": container with ID starting with 5e6536b65dc281a00a5af1e449754f4d8db938e99ef7e4a82854df29a1affffe not found: ID does not exist" Feb 02 10:12:36 crc kubenswrapper[4720]: I0202 10:12:36.913247 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" path="/var/lib/kubelet/pods/51156a08-e6e9-41b0-91d2-053b1f076a83/volumes" Feb 02 10:12:43 crc kubenswrapper[4720]: I0202 10:12:43.887881 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:12:43 crc kubenswrapper[4720]: E0202 10:12:43.888692 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:12:56 crc kubenswrapper[4720]: I0202 10:12:56.897409 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:12:56 crc kubenswrapper[4720]: E0202 10:12:56.898190 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:12:57 crc kubenswrapper[4720]: I0202 10:12:57.474714 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ee3d432-e07b-4dca-867c-7309d890f903" 
containerID="192f2b7687f86fd49656229b5daf1f6b04d75b72468290db78655f17dabf73e6" exitCode=0 Feb 02 10:12:57 crc kubenswrapper[4720]: I0202 10:12:57.474754 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-925pb" event={"ID":"6ee3d432-e07b-4dca-867c-7309d890f903","Type":"ContainerDied","Data":"192f2b7687f86fd49656229b5daf1f6b04d75b72468290db78655f17dabf73e6"} Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.760251 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.794499 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-925pb"] Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.803157 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-925pb"] Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.924802 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host\") pod \"6ee3d432-e07b-4dca-867c-7309d890f903\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.924926 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2mzm\" (UniqueName: \"kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm\") pod \"6ee3d432-e07b-4dca-867c-7309d890f903\" (UID: \"6ee3d432-e07b-4dca-867c-7309d890f903\") " Feb 02 10:12:58 crc kubenswrapper[4720]: I0202 10:12:58.925643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host" (OuterVolumeSpecName: "host") pod "6ee3d432-e07b-4dca-867c-7309d890f903" (UID: "6ee3d432-e07b-4dca-867c-7309d890f903"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:12:59 crc kubenswrapper[4720]: I0202 10:12:59.027690 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ee3d432-e07b-4dca-867c-7309d890f903-host\") on node \"crc\" DevicePath \"\"" Feb 02 10:12:59 crc kubenswrapper[4720]: I0202 10:12:59.495866 4720 scope.go:117] "RemoveContainer" containerID="192f2b7687f86fd49656229b5daf1f6b04d75b72468290db78655f17dabf73e6" Feb 02 10:12:59 crc kubenswrapper[4720]: I0202 10:12:59.495922 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-925pb" Feb 02 10:12:59 crc kubenswrapper[4720]: I0202 10:12:59.791868 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm" (OuterVolumeSpecName: "kube-api-access-p2mzm") pod "6ee3d432-e07b-4dca-867c-7309d890f903" (UID: "6ee3d432-e07b-4dca-867c-7309d890f903"). InnerVolumeSpecName "kube-api-access-p2mzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:12:59 crc kubenswrapper[4720]: I0202 10:12:59.889713 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2mzm\" (UniqueName: \"kubernetes.io/projected/6ee3d432-e07b-4dca-867c-7309d890f903-kube-api-access-p2mzm\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.016613 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zwx74/crc-debug-j8snl"] Feb 02 10:13:00 crc kubenswrapper[4720]: E0202 10:13:00.017046 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="registry-server" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017066 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="registry-server" Feb 02 10:13:00 crc kubenswrapper[4720]: E0202 10:13:00.017077 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="extract-utilities" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017083 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="extract-utilities" Feb 02 10:13:00 crc kubenswrapper[4720]: E0202 10:13:00.017093 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="extract-content" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017099 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="extract-content" Feb 02 10:13:00 crc kubenswrapper[4720]: E0202 10:13:00.017115 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee3d432-e07b-4dca-867c-7309d890f903" containerName="container-00" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017122 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee3d432-e07b-4dca-867c-7309d890f903" containerName="container-00" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017328 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee3d432-e07b-4dca-867c-7309d890f903" containerName="container-00" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017339 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="51156a08-e6e9-41b0-91d2-053b1f076a83" containerName="registry-server" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.017983 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.195025 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrld2\" (UniqueName: \"kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.195073 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.297103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.297152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrld2\" (UniqueName: \"kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.297299 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.759693 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrld2\" (UniqueName: \"kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2\") pod \"crc-debug-j8snl\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.906429 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee3d432-e07b-4dca-867c-7309d890f903" path="/var/lib/kubelet/pods/6ee3d432-e07b-4dca-867c-7309d890f903/volumes" Feb 02 10:13:00 crc kubenswrapper[4720]: I0202 10:13:00.933380 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:01 crc kubenswrapper[4720]: I0202 10:13:01.524541 4720 generic.go:334] "Generic (PLEG): container finished" podID="3a0ccc36-8320-4cbb-a208-b79a163c0d3d" containerID="af332d38c9b8e19b04ef7e5232854180caf94917e698960fdf862452300b5b76" exitCode=0 Feb 02 10:13:01 crc kubenswrapper[4720]: I0202 10:13:01.524711 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-j8snl" event={"ID":"3a0ccc36-8320-4cbb-a208-b79a163c0d3d","Type":"ContainerDied","Data":"af332d38c9b8e19b04ef7e5232854180caf94917e698960fdf862452300b5b76"} Feb 02 10:13:01 crc kubenswrapper[4720]: I0202 10:13:01.524980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-j8snl" event={"ID":"3a0ccc36-8320-4cbb-a208-b79a163c0d3d","Type":"ContainerStarted","Data":"fd5e1bd30eba730d6de3080fd73bf0f8e4aa24e6bfe9f8404691c296429747bb"} Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.673684 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.842134 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrld2\" (UniqueName: \"kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2\") pod \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.842301 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host\") pod \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\" (UID: \"3a0ccc36-8320-4cbb-a208-b79a163c0d3d\") " Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.842330 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host" (OuterVolumeSpecName: "host") pod "3a0ccc36-8320-4cbb-a208-b79a163c0d3d" (UID: "3a0ccc36-8320-4cbb-a208-b79a163c0d3d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.842781 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-host\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.862165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2" (OuterVolumeSpecName: "kube-api-access-jrld2") pod "3a0ccc36-8320-4cbb-a208-b79a163c0d3d" (UID: "3a0ccc36-8320-4cbb-a208-b79a163c0d3d"). InnerVolumeSpecName "kube-api-access-jrld2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:13:02 crc kubenswrapper[4720]: I0202 10:13:02.944224 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrld2\" (UniqueName: \"kubernetes.io/projected/3a0ccc36-8320-4cbb-a208-b79a163c0d3d-kube-api-access-jrld2\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:03 crc kubenswrapper[4720]: I0202 10:13:03.544344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-j8snl" event={"ID":"3a0ccc36-8320-4cbb-a208-b79a163c0d3d","Type":"ContainerDied","Data":"fd5e1bd30eba730d6de3080fd73bf0f8e4aa24e6bfe9f8404691c296429747bb"} Feb 02 10:13:03 crc kubenswrapper[4720]: I0202 10:13:03.544387 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5e1bd30eba730d6de3080fd73bf0f8e4aa24e6bfe9f8404691c296429747bb" Feb 02 10:13:03 crc kubenswrapper[4720]: I0202 10:13:03.544438 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-j8snl" Feb 02 10:13:04 crc kubenswrapper[4720]: I0202 10:13:04.294336 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-j8snl"] Feb 02 10:13:04 crc kubenswrapper[4720]: I0202 10:13:04.304307 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-j8snl"] Feb 02 10:13:04 crc kubenswrapper[4720]: I0202 10:13:04.900519 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0ccc36-8320-4cbb-a208-b79a163c0d3d" path="/var/lib/kubelet/pods/3a0ccc36-8320-4cbb-a208-b79a163c0d3d/volumes" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.524252 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zwx74/crc-debug-9tqbw"] Feb 02 10:13:05 crc kubenswrapper[4720]: E0202 10:13:05.525633 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0ccc36-8320-4cbb-a208-b79a163c0d3d" containerName="container-00" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.525659 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0ccc36-8320-4cbb-a208-b79a163c0d3d" containerName="container-00" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.527068 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0ccc36-8320-4cbb-a208-b79a163c0d3d" containerName="container-00" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.528918 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.700191 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.700267 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62q9\" (UniqueName: \"kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.802840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.802953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p62q9\" (UniqueName: \"kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.803007 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.837705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62q9\" (UniqueName: \"kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9\") pod \"crc-debug-9tqbw\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: I0202 10:13:05.865243 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:05 crc kubenswrapper[4720]: W0202 10:13:05.900192 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04a961e_1dcc_4de8_b462_b1b1cf0aaae9.slice/crio-d367ec6f26a2b1d597130f69f0935b1836999e7a9613cfd0891e5c7aa41a697e WatchSource:0}: Error finding container d367ec6f26a2b1d597130f69f0935b1836999e7a9613cfd0891e5c7aa41a697e: Status 404 returned error can't find the container with id d367ec6f26a2b1d597130f69f0935b1836999e7a9613cfd0891e5c7aa41a697e Feb 02 10:13:06 crc kubenswrapper[4720]: I0202 10:13:06.577661 4720 generic.go:334] "Generic (PLEG): container finished" podID="d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" containerID="2ade6c0c441bd3e1ba0af53c413e2fd14aca02c2ef5b11f65a3277a224a07289" exitCode=0 Feb 02 10:13:06 crc kubenswrapper[4720]: I0202 10:13:06.577802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" event={"ID":"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9","Type":"ContainerDied","Data":"2ade6c0c441bd3e1ba0af53c413e2fd14aca02c2ef5b11f65a3277a224a07289"} Feb 02 10:13:06 crc kubenswrapper[4720]: I0202 10:13:06.578188 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" event={"ID":"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9","Type":"ContainerStarted","Data":"d367ec6f26a2b1d597130f69f0935b1836999e7a9613cfd0891e5c7aa41a697e"} Feb 02 10:13:06 crc kubenswrapper[4720]: I0202 10:13:06.631532 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-9tqbw"] Feb 02 10:13:06 crc kubenswrapper[4720]: I0202 10:13:06.641799 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zwx74/crc-debug-9tqbw"] Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.719752 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.847342 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p62q9\" (UniqueName: \"kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9\") pod \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.847526 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host\") pod \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\" (UID: \"d04a961e-1dcc-4de8-b462-b1b1cf0aaae9\") " Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.847719 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host" (OuterVolumeSpecName: "host") pod "d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" (UID: "d04a961e-1dcc-4de8-b462-b1b1cf0aaae9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.848072 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-host\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.871848 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9" (OuterVolumeSpecName: "kube-api-access-p62q9") pod "d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" (UID: "d04a961e-1dcc-4de8-b462-b1b1cf0aaae9"). InnerVolumeSpecName "kube-api-access-p62q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:13:07 crc kubenswrapper[4720]: I0202 10:13:07.949658 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p62q9\" (UniqueName: \"kubernetes.io/projected/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9-kube-api-access-p62q9\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:08 crc kubenswrapper[4720]: I0202 10:13:08.598976 4720 scope.go:117] "RemoveContainer" containerID="2ade6c0c441bd3e1ba0af53c413e2fd14aca02c2ef5b11f65a3277a224a07289" Feb 02 10:13:08 crc kubenswrapper[4720]: I0202 10:13:08.599012 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/crc-debug-9tqbw" Feb 02 10:13:08 crc kubenswrapper[4720]: I0202 10:13:08.888200 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:13:08 crc kubenswrapper[4720]: E0202 10:13:08.889242 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:13:08 crc kubenswrapper[4720]: I0202 10:13:08.916979 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" path="/var/lib/kubelet/pods/d04a961e-1dcc-4de8-b462-b1b1cf0aaae9/volumes" Feb 02 10:13:21 crc kubenswrapper[4720]: I0202 10:13:21.887030 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:13:21 crc kubenswrapper[4720]: E0202 10:13:21.887928 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.081665 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-754d8f7774-zcmq5_71cd4ff5-a131-4208-9f0c-bc9651093d43/barbican-api/0.log" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.308870 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-754d8f7774-zcmq5_71cd4ff5-a131-4208-9f0c-bc9651093d43/barbican-api-log/0.log" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.351228 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ffcd48446-zlpmv_323388ac-fb46-49d8-9645-a7fa0bf0fbfe/barbican-keystone-listener/0.log" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.568971 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64767b4cf5-g7ntw_70f17bba-bccc-4cec-92ac-20d50fe48ed8/barbican-worker/0.log" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.624860 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64767b4cf5-g7ntw_70f17bba-bccc-4cec-92ac-20d50fe48ed8/barbican-worker-log/0.log" Feb 02 10:13:24 crc kubenswrapper[4720]: I0202 10:13:24.839327 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9rbzl_f616d658-9ec0-457b-a76a-fd6035250f16/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.104508 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_10257622-18ee-4e30-9625-328376f9c3f1/ceilometer-notification-agent/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.108384 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_10257622-18ee-4e30-9625-328376f9c3f1/ceilometer-central-agent/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.144287 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_10257622-18ee-4e30-9625-328376f9c3f1/proxy-httpd/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.157675 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ffcd48446-zlpmv_323388ac-fb46-49d8-9645-a7fa0bf0fbfe/barbican-keystone-listener-log/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.330457 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_10257622-18ee-4e30-9625-328376f9c3f1/sg-core/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.504437 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_2fe71d97-adbd-42c9-91b6-eaf03ad200f0/ceph/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.781298 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fdd8300-935b-4abd-b4c3-2a3894f613ed/cinder-api/0.log" Feb 02 10:13:25 crc kubenswrapper[4720]: I0202 10:13:25.781762 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fdd8300-935b-4abd-b4c3-2a3894f613ed/cinder-api-log/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.024716 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4ddf88e6-513d-474a-bf5d-82806004a740/cinder-backup/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.058703 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_4ddf88e6-513d-474a-bf5d-82806004a740/probe/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.065186 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3fbac84-aac7-4288-a400-7cb5931f2c2a/cinder-scheduler/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.247618 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b3fbac84-aac7-4288-a400-7cb5931f2c2a/probe/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.400407 4720 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c19bbd5c-8368-477b-8014-e1de85c9abb2/probe/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.588424 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xnrmb_a980c334-6351-4282-abd8-5be6adfd3b79/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.771646 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2kdh9_58baee1a-0156-461f-9be3-2a44ffedecdb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:26 crc kubenswrapper[4720]: I0202 10:13:26.924904 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-k9699_dfbef352-9960-44a8-b50d-02a480f008ca/init/0.log" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.136481 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-k9699_dfbef352-9960-44a8-b50d-02a480f008ca/init/0.log" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.203114 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:27 crc kubenswrapper[4720]: E0202 10:13:27.203624 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" containerName="container-00" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.203648 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" containerName="container-00" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.203933 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04a961e-1dcc-4de8-b462-b1b1cf0aaae9" containerName="container-00" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.205665 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.234981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.242334 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.242418 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.242471 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7kl\" (UniqueName: \"kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.343865 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.343921 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.343957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7kl\" (UniqueName: \"kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.344675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.344924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.379784 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kc7kl\" (UniqueName: \"kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl\") pod \"community-operators-6rcx6\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.478188 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bm9mv_0d48c45d-435e-4bff-947d-8bddd768de55/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.489664 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5bb847fbb7-k9699_dfbef352-9960-44a8-b50d-02a480f008ca/dnsmasq-dns/0.log" Feb 02 10:13:27 crc kubenswrapper[4720]: I0202 10:13:27.526210 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.088052 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1644b005-02e0-41ed-a421-289e79e6968f/glance-httpd/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.097436 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1644b005-02e0-41ed-a421-289e79e6968f/glance-log/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.177929 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.315328 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbdee974-abaf-4569-b6d0-e2efe90a53b1/glance-log/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.323046 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_dbdee974-abaf-4569-b6d0-e2efe90a53b1/glance-httpd/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.529919 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86d4c4b4d8-gbbkh_8c4ce7a3-3e40-463d-b5f9-95b3352960f2/horizon/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.634860 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-xh2fp_784ddeb5-955a-4e2e-a5c8-405f97d93cdb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.791232 4720 generic.go:334] "Generic (PLEG): container finished" podID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerID="db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032" exitCode=0 Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.791284 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerDied","Data":"db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032"} Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.791322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerStarted","Data":"dc3b647e42677ff5d489924f59fd80a2c6309d9c63d9d35304dbd1030155ea4f"} Feb 02 10:13:28 crc kubenswrapper[4720]: I0202 10:13:28.896396 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cn2qd_f7dcbadb-c2c9-4dd0-b4a1-2de9e973babe/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.132616 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500441-mkvqn_dff71476-fbc6-40f0-9bbf-f165ad0d6ccb/keystone-cron/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.250350 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c19bbd5c-8368-477b-8014-e1de85c9abb2/cinder-volume/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.293653 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_17fd5894-4433-498d-8d28-b2fa366949d3/kube-state-metrics/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.440617 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86d4c4b4d8-gbbkh_8c4ce7a3-3e40-463d-b5f9-95b3352960f2/horizon-log/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.610686 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dnp9l_4ea7861d-22fa-43cc-ad91-d700bd7e025b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.805160 4720 generic.go:334] "Generic (PLEG): container finished" podID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerID="2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48" exitCode=0 Feb 02 10:13:29 crc kubenswrapper[4720]: I0202 10:13:29.805235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerDied","Data":"2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48"} Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.138609 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_742e6521-0c7a-4dfe-9c8f-1a086e180d73/manila-scheduler/0.log" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.146371 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0d5ebc33-71db-45cf-be50-fa0b92d38d7f/manila-api/0.log" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.203955 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_742e6521-0c7a-4dfe-9c8f-1a086e180d73/probe/0.log" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.420938 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_07e6a921-0f7c-40b4-9136-549d1cdf45c1/probe/0.log" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.801027 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_07e6a921-0f7c-40b4-9136-549d1cdf45c1/manila-share/0.log" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.817082 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerStarted","Data":"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be"} Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.838343 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6rcx6" podStartSLOduration=2.446594401 
podStartE2EDuration="3.838323065s" podCreationTimestamp="2026-02-02 10:13:27 +0000 UTC" firstStartedPulling="2026-02-02 10:13:28.793745428 +0000 UTC m=+4642.649370984" lastFinishedPulling="2026-02-02 10:13:30.185474092 +0000 UTC m=+4644.041099648" observedRunningTime="2026-02-02 10:13:30.835458794 +0000 UTC m=+4644.691084350" watchObservedRunningTime="2026-02-02 10:13:30.838323065 +0000 UTC m=+4644.693948631" Feb 02 10:13:30 crc kubenswrapper[4720]: I0202 10:13:30.901670 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0d5ebc33-71db-45cf-be50-fa0b92d38d7f/manila-api-log/0.log" Feb 02 10:13:31 crc kubenswrapper[4720]: I0202 10:13:31.505210 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-8wbgt_b9e622f6-37ab-46e2-98f9-475b39bc469a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:31 crc kubenswrapper[4720]: I0202 10:13:31.839193 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dc5779b69-676fs_3287a569-10ab-49e9-bf47-498b14a54b1c/neutron-httpd/0.log" Feb 02 10:13:32 crc kubenswrapper[4720]: I0202 10:13:32.435472 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dc5779b69-676fs_3287a569-10ab-49e9-bf47-498b14a54b1c/neutron-api/0.log" Feb 02 10:13:33 crc kubenswrapper[4720]: I0202 10:13:33.314940 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cd79436b-a659-4d45-89cc-95f627093f00/nova-cell0-conductor-conductor/0.log" Feb 02 10:13:33 crc kubenswrapper[4720]: I0202 10:13:33.481798 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-57f5dcffbd-gvpfb_ec11bd2b-cee2-413f-9a50-a27d03a27fd8/keystone-api/0.log" Feb 02 10:13:33 crc kubenswrapper[4720]: I0202 10:13:33.859354 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c16b1831-a551-4fa5-ba76-5bf2c7bd2782/nova-cell1-conductor-conductor/0.log" Feb 02 10:13:34 crc kubenswrapper[4720]: I0202 10:13:34.170682 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9671844-9042-4f97-8d10-12a7e1794c3e/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 10:13:34 crc kubenswrapper[4720]: I0202 10:13:34.268006 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b/nova-api-log/0.log" Feb 02 10:13:34 crc kubenswrapper[4720]: I0202 10:13:34.409387 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6h4jh_709087d8-ff60-4902-acf4-f4b23ffe4149/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:34 crc kubenswrapper[4720]: I0202 10:13:34.620152 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7f33c374-23ce-4cf0-a453-b63ae0d2cb1a/nova-metadata-log/0.log" Feb 02 10:13:34 crc kubenswrapper[4720]: I0202 10:13:34.887768 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:13:34 crc kubenswrapper[4720]: E0202 10:13:34.888020 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.060433 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ec53d7b9-1299-4d9a-8b7d-0bff8e28ff0b/nova-api-api/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.102563 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29c13267-2f9e-4e1c-b52f-66be31da5155/mysql-bootstrap/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.298344 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1215d777-66de-494f-9017-6d859aa3d120/nova-scheduler-scheduler/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.382944 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29c13267-2f9e-4e1c-b52f-66be31da5155/galera/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.559679 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29c13267-2f9e-4e1c-b52f-66be31da5155/mysql-bootstrap/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.638805 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_289905c2-8b8c-4d85-a9d4-19ac7c9b9b06/mysql-bootstrap/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.852035 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_289905c2-8b8c-4d85-a9d4-19ac7c9b9b06/mysql-bootstrap/0.log" Feb 02 10:13:35 crc kubenswrapper[4720]: I0202 10:13:35.903464 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_289905c2-8b8c-4d85-a9d4-19ac7c9b9b06/galera/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.083787 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f50a6a2b-2c12-435d-801c-f97f65cf36f9/openstackclient/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.187437 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-774qf_57c88c8b-430e-40d7-9598-464d1dbead23/ovn-controller/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.389078 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mgchh_3b570bee-e4d7-4d5a-98a2-939066b0dff4/openstack-network-exporter/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.482129 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7f33c374-23ce-4cf0-a453-b63ae0d2cb1a/nova-metadata-metadata/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.546502 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b979n_35455de2-123c-442f-88de-e3fa878b3c09/ovsdb-server-init/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.747667 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b979n_35455de2-123c-442f-88de-e3fa878b3c09/ovsdb-server-init/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.789202 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b979n_35455de2-123c-442f-88de-e3fa878b3c09/ovsdb-server/0.log" Feb 02 10:13:36 crc kubenswrapper[4720]: I0202 10:13:36.792191 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-b979n_35455de2-123c-442f-88de-e3fa878b3c09/ovs-vswitchd/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.500011 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d349fb1c-3289-47a0-a5ec-525740680f69/openstack-network-exporter/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.526312 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.526349 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.536556 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d349fb1c-3289-47a0-a5ec-525740680f69/ovn-northd/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.560165 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rss67_6a454a20-f16c-4627-8c70-65e3ea30a26d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.591182 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.779750 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_871b4d00-52ff-41e8-9e5a-6f1e567dcef5/ovsdbserver-nb/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.820508 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_871b4d00-52ff-41e8-9e5a-6f1e567dcef5/openstack-network-exporter/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.935750 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.954566 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d1e015f3-dd0c-4380-ad73-362c5f1b704f/openstack-network-exporter/0.log" Feb 02 10:13:37 crc kubenswrapper[4720]: I0202 10:13:37.988758 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d1e015f3-dd0c-4380-ad73-362c5f1b704f/ovsdbserver-sb/0.log" Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.010474 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.366207 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52efc47f-bb34-4935-9b64-94e52a883272/setup-container/0.log" Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.537652 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52efc47f-bb34-4935-9b64-94e52a883272/setup-container/0.log" Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.565423 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52efc47f-bb34-4935-9b64-94e52a883272/rabbitmq/0.log" Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.598303 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c8cc9866d-t5g2d_a4ab26f9-1b43-4280-a23c-0124dc1cd945/placement-api/0.log" Feb 02 10:13:38 crc kubenswrapper[4720]: I0202 10:13:38.630995 
4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c8cc9866d-t5g2d_a4ab26f9-1b43-4280-a23c-0124dc1cd945/placement-log/0.log" Feb 02 10:13:39 crc kubenswrapper[4720]: I0202 10:13:39.552975 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5123a4f9-6161-445e-a17c-184cfbe9c4bb/setup-container/0.log" Feb 02 10:13:39 crc kubenswrapper[4720]: I0202 10:13:39.803179 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5123a4f9-6161-445e-a17c-184cfbe9c4bb/setup-container/0.log" Feb 02 10:13:39 crc kubenswrapper[4720]: I0202 10:13:39.831435 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5123a4f9-6161-445e-a17c-184cfbe9c4bb/rabbitmq/0.log" Feb 02 10:13:39 crc kubenswrapper[4720]: I0202 10:13:39.872632 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ppjrv_4c39f840-a7fd-482a-87c1-a2bd895325f1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:39 crc kubenswrapper[4720]: I0202 10:13:39.885949 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6rcx6" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="registry-server" containerID="cri-o://144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be" gracePeriod=2 Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.071214 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-kp2wx_5dc086de-1441-4dc6-b225-843ce650e62c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.371102 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8zjms_d1257ae5-08dc-4977-9268-d988d889a1e3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.474153 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.585128 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-skcc8_7c7deec2-a8b1-445c-8603-c781d4636bac/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.618475 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content\") pod \"81114cb6-5a2a-4077-a938-5914a6f6cf31\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.618609 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc7kl\" (UniqueName: \"kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl\") pod \"81114cb6-5a2a-4077-a938-5914a6f6cf31\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.618646 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities\") pod \"81114cb6-5a2a-4077-a938-5914a6f6cf31\" (UID: \"81114cb6-5a2a-4077-a938-5914a6f6cf31\") " Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.621061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities" (OuterVolumeSpecName: "utilities") pod "81114cb6-5a2a-4077-a938-5914a6f6cf31" (UID: "81114cb6-5a2a-4077-a938-5914a6f6cf31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.632612 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl" (OuterVolumeSpecName: "kube-api-access-kc7kl") pod "81114cb6-5a2a-4077-a938-5914a6f6cf31" (UID: "81114cb6-5a2a-4077-a938-5914a6f6cf31"). InnerVolumeSpecName "kube-api-access-kc7kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.680725 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81114cb6-5a2a-4077-a938-5914a6f6cf31" (UID: "81114cb6-5a2a-4077-a938-5914a6f6cf31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.716405 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qvg7b_9728f5ea-ee17-42d8-a297-958b3247e48e/ssh-known-hosts-edpm-deployment/0.log" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.721243 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc7kl\" (UniqueName: \"kubernetes.io/projected/81114cb6-5a2a-4077-a938-5914a6f6cf31-kube-api-access-kc7kl\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.721277 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.721290 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81114cb6-5a2a-4077-a938-5914a6f6cf31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.907044 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rcx6" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.907046 4720 generic.go:334] "Generic (PLEG): container finished" podID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerID="144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be" exitCode=0 Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.907102 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerDied","Data":"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be"} Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.907133 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rcx6" event={"ID":"81114cb6-5a2a-4077-a938-5914a6f6cf31","Type":"ContainerDied","Data":"dc3b647e42677ff5d489924f59fd80a2c6309d9c63d9d35304dbd1030155ea4f"} Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.907150 4720 scope.go:117] "RemoveContainer" containerID="144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.927038 4720 scope.go:117] "RemoveContainer" containerID="2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48" Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.942476 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.952722 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6rcx6"] Feb 02 10:13:40 crc kubenswrapper[4720]: I0202 10:13:40.954025 4720 scope.go:117] "RemoveContainer" containerID="db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.007048 4720 scope.go:117] "RemoveContainer" containerID="144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be" Feb 02 10:13:41 crc kubenswrapper[4720]: E0202 10:13:41.009218 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be\": container with ID 
starting with 144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be not found: ID does not exist" containerID="144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.009276 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be"} err="failed to get container status \"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be\": rpc error: code = NotFound desc = could not find container \"144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be\": container with ID starting with 144519745ea4971f6deb6c48b64b91ffeb8329cdcff0eac8d84856a60abf03be not found: ID does not exist" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.009305 4720 scope.go:117] "RemoveContainer" containerID="2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48" Feb 02 10:13:41 crc kubenswrapper[4720]: E0202 10:13:41.009641 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48\": container with ID starting with 2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48 not found: ID does not exist" containerID="2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.009686 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48"} err="failed to get container status \"2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48\": rpc error: code = NotFound desc = could not find container \"2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48\": container with ID starting with 2220dc8895e7b00263fe551686cc8ec0b798d639702be0e6806700195295df48 not found: ID does not exist" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.009716 4720 scope.go:117] "RemoveContainer" containerID="db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032" Feb 02 10:13:41 crc kubenswrapper[4720]: E0202 10:13:41.011071 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032\": container with ID starting with db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032 not found: ID does not exist" containerID="db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.011188 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032"} err="failed to get container status \"db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032\": rpc error: code = NotFound desc = could not find container \"db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032\": container with ID starting with db5f563eb4f0a7df4769b76b887035cfb94d3e43eb7708f476c78cd2e7a14032 not found: ID does not exist" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.030349 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d89bf9699-kpnnn_1182b131-3e0d-417a-8f50-4e0b98e7635f/proxy-server/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.128256 4720 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v7qgg_f80e41dc-2fd4-4987-9ec7-53addd3b9048/swift-ring-rebalance/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.150704 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d89bf9699-kpnnn_1182b131-3e0d-417a-8f50-4e0b98e7635f/proxy-httpd/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.244078 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/account-auditor/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.340688 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/account-reaper/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.433139 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/account-replicator/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.511708 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/account-server/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.558378 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/container-auditor/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.649183 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/container-server/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.661191 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/container-replicator/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.752489 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/container-updater/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.809931 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/object-auditor/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.906364 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/object-replicator/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.923622 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/object-server/0.log" Feb 02 10:13:41 crc kubenswrapper[4720]: I0202 10:13:41.932158 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/object-expirer/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.041128 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/object-updater/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.092585 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/rsync/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.152492 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_90bae269-30fb-4c0c-8e00-717f68ef2b01/swift-recon-cron/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.361896 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-w4mck_e6295be1-7d41-4b42-a8a6-7b18ff6bb04e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.449925 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_daa51d24-e496-4a32-88c3-89ef00451e74/tempest-tests-tempest-tests-runner/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.551305 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b4b6f688-8a45-4cc3-8677-c83761f14947/test-operator-logs-container/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.721471 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fqw84_20241116-e310-4877-b6a3-c0c72b2470fd/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 10:13:42 crc kubenswrapper[4720]: I0202 10:13:42.900144 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" path="/var/lib/kubelet/pods/81114cb6-5a2a-4077-a938-5914a6f6cf31/volumes" Feb 02 10:13:49 crc kubenswrapper[4720]: I0202 10:13:49.886409 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:13:49 crc kubenswrapper[4720]: E0202 10:13:49.887213 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:13:54 crc kubenswrapper[4720]: I0202 10:13:54.088333 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f3a7ecf-2ee4-4f15-9785-bc895935d771/memcached/0.log" Feb 02 10:14:00 crc kubenswrapper[4720]: I0202 10:14:00.887240 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:14:00 crc kubenswrapper[4720]: E0202 10:14:00.888039 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.339757 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-ts2bs_74c9a454-0e13-4b29-89d5-cbfd77d7db21/manager/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.635470 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-txm2d_0773964a-e514-4efc-8e88-ea5e71d4a7eb/manager/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.768807 4720 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-2v5h2_2512bb69-cdd5-4288-a023-08271514a5ed/manager/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.826633 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/util/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.976192 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/util/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.995200 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/pull/0.log" Feb 02 10:14:10 crc kubenswrapper[4720]: I0202 10:14:10.998442 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/pull/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.169954 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/util/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.186097 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/extract/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.211643 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e04435823cb0c0fc9296188562163e0e81bbaeb52a92fa7afd8baaba34tsf7l_07ebfa1a-d538-4a12-87dd-cc8658df99a3/pull/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.443842 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-ccx5d_f48341fa-8eb8-49f2-b177-2c10de4db8fd/manager/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.497761 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-bw8tf_13d9ccbd-a49d-4b71-9c76-251ad5309b8d/manager/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.666257 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-42lrb_ae44cd5d-4fe1-4268-b247-d03075fd37b2/manager/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.838580 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-np86c_e72595d8-8a2a-4b75-8d5d-881209734957/manager/0.log" Feb 02 10:14:11 crc kubenswrapper[4720]: I0202 10:14:11.948116 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-k9qvn_98ee1d10-d444-4de0-a20c-99258ae4c5da/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.063116 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-bbsfl_00a9f518-1d32-4029-ab03-024c73526aa6/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: 
I0202 10:14:12.243831 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-99gf8_4dd293f6-9311-41de-8c84-66780a5e7a77/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.294160 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-z6v6h_365a2cb5-0761-452a-a3e9-b19749919661/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.419004 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-8s87h_409368d6-8b01-4aa7-8d28-65c77d3158ab/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.574933 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-m5hl6_bfdd7555-2c9b-4f4f-a25c-289667ea0526/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.635724 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9tgq4_e29b414a-79dc-49f1-bf42-01bb60a090c5/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.777202 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d6pwbf_adbc4332-64c2-4e3d-82de-495f217179a5/manager/0.log" Feb 02 10:14:12 crc kubenswrapper[4720]: I0202 10:14:12.939733 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b57c84fd5-qzpjd_8f295d56-98ca-48ee-a63a-32956f7693f7/operator/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.150638 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xpqwh_63c49313-5150-41a6-aa66-501aee8efe41/registry-server/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.358361 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-r2h8r_dcd3565d-97bb-4e80-8620-5399b7ab6f2a/manager/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.401912 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-v5vdx_0d0b8077-9ce3-47a4-bb23-7b21a8874d1e/manager/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.639393 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-g7wsx_53e25cfa-ef34-4ee4-826e-767a4f154f15/operator/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.847382 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-r5wzv_e9ad4b83-8b4a-4965-a9c5-b1e1992b4d2e/manager/0.log" Feb 02 10:14:13 crc kubenswrapper[4720]: I0202 10:14:13.886721 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:14:13 crc kubenswrapper[4720]: E0202 10:14:13.887050 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:14:14 crc kubenswrapper[4720]: I0202 10:14:14.031100 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-b4svf_13347ee1-a6a4-435f-a5e5-8c9af5506dd9/manager/0.log" Feb 02 10:14:14 crc kubenswrapper[4720]: I0202 10:14:14.137548 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75d6c7dbc6-wwpdn_813cdc5b-c252-4b55-8d8a-cf0bfde51059/manager/0.log" Feb 02 10:14:14 crc kubenswrapper[4720]: I0202 10:14:14.200051 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-jc9pb_12badb48-0f9b-41a2-930d-7573f8485dcf/manager/0.log" Feb 02 10:14:14 crc kubenswrapper[4720]: I0202 10:14:14.287992 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-5gncs_a80257cd-1bb9-4c20-87d3-ab6741c78b57/manager/0.log" Feb 02 10:14:25 crc kubenswrapper[4720]: I0202 10:14:25.886800 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:14:25 crc kubenswrapper[4720]: E0202 10:14:25.887597 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:14:34 crc kubenswrapper[4720]: I0202 10:14:34.565161 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tb5vz_7ed9a100-019b-4f35-ab4c-187b087a3e99/control-plane-machine-set-operator/0.log" Feb 02 10:14:34 crc kubenswrapper[4720]: I0202 10:14:34.740562 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xz2ts_4b5090d5-9ae8-4af6-a6b7-a4e29b671585/kube-rbac-proxy/0.log" Feb 02 10:14:34 crc kubenswrapper[4720]: I0202 10:14:34.784455 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xz2ts_4b5090d5-9ae8-4af6-a6b7-a4e29b671585/machine-api-operator/0.log" Feb 02 10:14:36 crc kubenswrapper[4720]: I0202 10:14:36.898341 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:14:36 crc kubenswrapper[4720]: E0202 10:14:36.899395 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:14:47 crc kubenswrapper[4720]: I0202 10:14:47.283162 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7bc8x_1dbc9be1-9930-49c6-a8e1-0767194f295f/cert-manager-controller/0.log" Feb 02 10:14:47 crc kubenswrapper[4720]: I0202 10:14:47.445237 4720 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dsxr2_8cd38fa1-b879-4abd-86e5-3d9fd6847c6a/cert-manager-cainjector/0.log" Feb 02 10:14:47 crc kubenswrapper[4720]: I0202 10:14:47.477331 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-twh55_5e86cac4-2acd-49d8-b01e-ef7becdce359/cert-manager-webhook/0.log" Feb 02 10:14:50 crc kubenswrapper[4720]: I0202 10:14:50.887461 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:14:50 crc kubenswrapper[4720]: E0202 10:14:50.888267 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.169567 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz"] Feb 02 10:15:00 crc kubenswrapper[4720]: E0202 10:15:00.170743 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="extract-content" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.170755 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="extract-content" Feb 02 10:15:00 crc kubenswrapper[4720]: E0202 10:15:00.170779 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="extract-utilities" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.170785 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="extract-utilities" Feb 02 10:15:00 crc kubenswrapper[4720]: E0202 10:15:00.170806 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="registry-server" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.170812 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="registry-server" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.171014 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="81114cb6-5a2a-4077-a938-5914a6f6cf31" containerName="registry-server" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.171641 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.173696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.173945 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.180581 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz"] Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.272838 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.272904 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxmj\" (UniqueName: \"kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.273188 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.375322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.375468 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.375492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxmj\" (UniqueName: \"kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.376724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume\") pod 
\"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.761573 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.761735 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxmj\" (UniqueName: \"kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj\") pod \"collect-profiles-29500455-q6wsz\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:00 crc kubenswrapper[4720]: I0202 10:15:00.797398 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:01 crc kubenswrapper[4720]: I0202 10:15:01.402635 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz"] Feb 02 10:15:01 crc kubenswrapper[4720]: I0202 10:15:01.650655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" event={"ID":"7d8ee293-946d-4332-958d-62556d858092","Type":"ContainerStarted","Data":"d0c286b399e104b938a1d6ca510af9cc54b273c14bd8683f7c7ae4a1e93b14d5"} Feb 02 10:15:01 crc kubenswrapper[4720]: I0202 10:15:01.651018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" event={"ID":"7d8ee293-946d-4332-958d-62556d858092","Type":"ContainerStarted","Data":"0510dcddd373e638f7947f8ba8e22bbbfe69b56e8bed0bb94177a061ce7b8961"} Feb 02 10:15:01 crc kubenswrapper[4720]: I0202 10:15:01.887551 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:15:01 crc kubenswrapper[4720]: E0202 10:15:01.887845 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.111050 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-tvcgq_f088e9b1-46d0-4f11-a561-3bffd75fb297/nmstate-console-plugin/0.log" Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.663719 4720 generic.go:334] "Generic (PLEG): container finished" podID="7d8ee293-946d-4332-958d-62556d858092" containerID="d0c286b399e104b938a1d6ca510af9cc54b273c14bd8683f7c7ae4a1e93b14d5" exitCode=0 Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.664338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" 
event={"ID":"7d8ee293-946d-4332-958d-62556d858092","Type":"ContainerDied","Data":"d0c286b399e104b938a1d6ca510af9cc54b273c14bd8683f7c7ae4a1e93b14d5"} Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.897689 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zb6sf_09bfc10e-1726-4216-9abf-9f8f17521be8/nmstate-handler/0.log" Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.969742 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4lrz9_78d2d6b5-0eef-4124-946c-e987ac1fbb95/kube-rbac-proxy/0.log" Feb 02 10:15:02 crc kubenswrapper[4720]: I0202 10:15:02.987951 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4lrz9_78d2d6b5-0eef-4124-946c-e987ac1fbb95/nmstate-metrics/0.log" Feb 02 10:15:03 crc kubenswrapper[4720]: I0202 10:15:03.209167 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-dqxbg_e3f926bb-69d8-493a-82e3-93bb3c1446b0/nmstate-webhook/0.log" Feb 02 10:15:03 crc kubenswrapper[4720]: I0202 10:15:03.213009 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cvrx5_0438f049-5f34-4bfa-8491-8477d69b7f3d/nmstate-operator/0.log" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.079485 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.190796 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzxmj\" (UniqueName: \"kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj\") pod \"7d8ee293-946d-4332-958d-62556d858092\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.191300 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume\") pod \"7d8ee293-946d-4332-958d-62556d858092\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.191324 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume\") pod \"7d8ee293-946d-4332-958d-62556d858092\" (UID: \"7d8ee293-946d-4332-958d-62556d858092\") " Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.191939 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d8ee293-946d-4332-958d-62556d858092" (UID: "7d8ee293-946d-4332-958d-62556d858092"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.195841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj" (OuterVolumeSpecName: "kube-api-access-jzxmj") pod "7d8ee293-946d-4332-958d-62556d858092" (UID: "7d8ee293-946d-4332-958d-62556d858092"). InnerVolumeSpecName "kube-api-access-jzxmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.195889 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d8ee293-946d-4332-958d-62556d858092" (UID: "7d8ee293-946d-4332-958d-62556d858092"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.293331 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzxmj\" (UniqueName: \"kubernetes.io/projected/7d8ee293-946d-4332-958d-62556d858092-kube-api-access-jzxmj\") on node \"crc\" DevicePath \"\"" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.293371 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8ee293-946d-4332-958d-62556d858092-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.293380 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8ee293-946d-4332-958d-62556d858092-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.453571 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"] Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.461146 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500410-sw6x5"] Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.681475 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" event={"ID":"7d8ee293-946d-4332-958d-62556d858092","Type":"ContainerDied","Data":"0510dcddd373e638f7947f8ba8e22bbbfe69b56e8bed0bb94177a061ce7b8961"} Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.681521 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0510dcddd373e638f7947f8ba8e22bbbfe69b56e8bed0bb94177a061ce7b8961" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.681525 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500455-q6wsz" Feb 02 10:15:04 crc kubenswrapper[4720]: I0202 10:15:04.897898 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee8905a-743c-47e7-87d0-94380429512f" path="/var/lib/kubelet/pods/bee8905a-743c-47e7-87d0-94380429512f/volumes" Feb 02 10:15:13 crc kubenswrapper[4720]: I0202 10:15:13.887444 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:15:13 crc kubenswrapper[4720]: E0202 10:15:13.888365 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:20 crc kubenswrapper[4720]: I0202 10:15:20.984356 4720 scope.go:117] "RemoveContainer" containerID="1eeebac1b026f8feb0ef38ddba0df7dff61237ecb5118dc30ebd0158a5b5e73f" Feb 02 10:15:26 crc kubenswrapper[4720]: I0202 10:15:26.894479 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:15:26 crc kubenswrapper[4720]: E0202 10:15:26.898362 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.127815 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8kt65_02e03625-5584-44e1-8fe5-2551b8d05596/kube-rbac-proxy/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.140630 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8kt65_02e03625-5584-44e1-8fe5-2551b8d05596/controller/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.296747 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-frr-files/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.480309 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-frr-files/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.514907 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-reloader/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.542067 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-metrics/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.558676 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-reloader/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.727130 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-frr-files/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.727843 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-reloader/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.741226 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-metrics/0.log" Feb 02 10:15:33 crc kubenswrapper[4720]: I0202 10:15:33.771066 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-metrics/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.166878 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/controller/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.201740 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-frr-files/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.225149 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-metrics/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.227803 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/cp-reloader/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.427094 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/kube-rbac-proxy/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.430399 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/kube-rbac-proxy-frr/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.466616 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/frr-metrics/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.662556 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/reloader/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.672643 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vmc97_fdebd093-0e66-4e15-b5b4-9052f4f4c487/frr-k8s-webhook-server/0.log" Feb 02 10:15:34 crc kubenswrapper[4720]: I0202 10:15:34.999331 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-86589bcccc-n9w8d_44b51ab2-e087-4a71-84cd-99575451219a/manager/0.log" Feb 02 10:15:35 crc kubenswrapper[4720]: I0202 10:15:35.162691 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7648947864-tvl8z_28f5dbe0-77e9-47e8-bf43-207d6467558d/webhook-server/0.log" Feb 02 10:15:35 crc kubenswrapper[4720]: I0202 10:15:35.367316 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rzstb_e75ccb9d-f65f-40ac-8255-92685e9d3dd3/kube-rbac-proxy/0.log" Feb 02 10:15:36 crc kubenswrapper[4720]: I0202 10:15:36.116570 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-rzstb_e75ccb9d-f65f-40ac-8255-92685e9d3dd3/speaker/0.log" Feb 02 10:15:36 crc kubenswrapper[4720]: I0202 10:15:36.414580 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k2llz_0c063cbf-7388-4656-ab8b-0796a145119e/frr/0.log" Feb 02 10:15:41 crc kubenswrapper[4720]: I0202 10:15:41.887043 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:15:41 crc kubenswrapper[4720]: E0202 10:15:41.887829 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:50 crc kubenswrapper[4720]: I0202 10:15:50.704812 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/util/0.log" Feb 02 10:15:50 crc kubenswrapper[4720]: I0202 10:15:50.947187 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/util/0.log" Feb 02 10:15:50 crc kubenswrapper[4720]: I0202 10:15:50.980012 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/pull/0.log" Feb 02 10:15:50 crc kubenswrapper[4720]: I0202 10:15:50.986662 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/pull/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.319767 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/util/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.431073 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/extract/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.445009 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckfsh9_e0cc5d55-a9c9-4d80-9650-7eab31776b2c/pull/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.589077 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/util/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.769953 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/util/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.775010 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/pull/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.780164 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/pull/0.log" Feb 02 10:15:51 crc kubenswrapper[4720]: I0202 10:15:51.996979 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/util/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.028940 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/pull/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.059322 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713d5d7l_a38fc737-0c77-43c6-b94a-47d89c49d9c8/extract/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.256640 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-utilities/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.404574 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-content/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.410989 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-content/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.444376 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-utilities/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.623541 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-content/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.649683 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/extract-utilities/0.log" Feb 02 10:15:52 crc kubenswrapper[4720]: I0202 10:15:52.858224 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-utilities/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.090168 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-content/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.179633 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-utilities/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.180447 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-content/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.258625 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b84cw_be069028-7bae-40c2-a12f-780dbf9c4ccc/registry-server/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.381660 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-content/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.417508 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/extract-utilities/0.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.617072 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/2.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.671294 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wm6qb_bd2b2ce4-bf90-4cab-b03c-010e17f20ff5/marketplace-operator/1.log" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.888872 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:15:53 crc kubenswrapper[4720]: E0202 10:15:53.889121 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:15:53 crc kubenswrapper[4720]: I0202 10:15:53.899805 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-utilities/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.050512 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6hp7_55269fe7-fb5f-4d5d-b9f3-b9ddf189ae81/registry-server/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.086993 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-utilities/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.110724 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-content/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.126932 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-content/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.326007 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-content/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.360374 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/extract-utilities/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.534490 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r87rg_0502035c-9982-417c-94f2-73046cbfbbbc/registry-server/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.588216 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-utilities/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.822976 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-content/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.837483 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-utilities/0.log" Feb 02 10:15:54 crc kubenswrapper[4720]: I0202 10:15:54.839840 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-content/0.log" Feb 02 10:15:55 crc kubenswrapper[4720]: I0202 10:15:55.000334 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-content/0.log" Feb 02 10:15:55 crc kubenswrapper[4720]: I0202 10:15:55.016677 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/extract-utilities/0.log" Feb 02 10:15:55 crc kubenswrapper[4720]: I0202 10:15:55.501540 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgrw_f409f361-d210-43c7-a209-3e2cf6678eb1/registry-server/0.log" Feb 02 10:16:07 crc kubenswrapper[4720]: I0202 10:16:07.886555 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:16:07 crc kubenswrapper[4720]: E0202 10:16:07.887396 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:16:21 crc kubenswrapper[4720]: I0202 10:16:21.887033 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:16:21 crc kubenswrapper[4720]: E0202 10:16:21.887716 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:16:36 crc kubenswrapper[4720]: I0202 10:16:36.900261 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:16:36 crc 
kubenswrapper[4720]: E0202 10:16:36.901149 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:16:48 crc kubenswrapper[4720]: I0202 10:16:48.892357 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:16:48 crc kubenswrapper[4720]: E0202 10:16:48.893198 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:17:00 crc kubenswrapper[4720]: I0202 10:17:00.888489 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:17:00 crc kubenswrapper[4720]: E0202 10:17:00.889151 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:17:11 crc kubenswrapper[4720]: I0202 10:17:11.887322 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:17:11 crc kubenswrapper[4720]: E0202 10:17:11.888242 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8l7nw_openshift-machine-config-operator(0342796d-ac1a-4cfa-8666-1c772eab1ed2)\"" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" Feb 02 10:17:23 crc kubenswrapper[4720]: I0202 10:17:23.889924 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:17:25 crc kubenswrapper[4720]: I0202 10:17:25.060795 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"0c29c90396080c3b6f0eb47fdf6a655e780cf1edc14fd7dcdccb0bee32ce6957"} Feb 02 10:18:16 crc kubenswrapper[4720]: I0202 10:18:16.624106 4720 generic.go:334] "Generic (PLEG): container finished" podID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerID="adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d" exitCode=0 Feb 02 10:18:16 crc kubenswrapper[4720]: I0202 10:18:16.624201 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zwx74/must-gather-488gp" 
event={"ID":"c1f67a28-54b6-4a3e-bace-22284dc415da","Type":"ContainerDied","Data":"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d"} Feb 02 10:18:16 crc kubenswrapper[4720]: I0202 10:18:16.625510 4720 scope.go:117] "RemoveContainer" containerID="adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d" Feb 02 10:18:16 crc kubenswrapper[4720]: I0202 10:18:16.815874 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zwx74_must-gather-488gp_c1f67a28-54b6-4a3e-bace-22284dc415da/gather/0.log" Feb 02 10:18:24 crc kubenswrapper[4720]: I0202 10:18:24.985006 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zwx74/must-gather-488gp"] Feb 02 10:18:24 crc kubenswrapper[4720]: I0202 10:18:24.985654 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zwx74/must-gather-488gp" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="copy" containerID="cri-o://ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307" gracePeriod=2 Feb 02 10:18:24 crc kubenswrapper[4720]: I0202 10:18:24.991691 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zwx74/must-gather-488gp"] Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.462392 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zwx74_must-gather-488gp_c1f67a28-54b6-4a3e-bace-22284dc415da/copy/0.log" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.463205 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.601989 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s67q\" (UniqueName: \"kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q\") pod \"c1f67a28-54b6-4a3e-bace-22284dc415da\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.602230 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output\") pod \"c1f67a28-54b6-4a3e-bace-22284dc415da\" (UID: \"c1f67a28-54b6-4a3e-bace-22284dc415da\") " Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.627511 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q" (OuterVolumeSpecName: "kube-api-access-5s67q") pod "c1f67a28-54b6-4a3e-bace-22284dc415da" (UID: "c1f67a28-54b6-4a3e-bace-22284dc415da"). InnerVolumeSpecName "kube-api-access-5s67q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.704412 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s67q\" (UniqueName: \"kubernetes.io/projected/c1f67a28-54b6-4a3e-bace-22284dc415da-kube-api-access-5s67q\") on node \"crc\" DevicePath \"\"" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.785207 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zwx74_must-gather-488gp_c1f67a28-54b6-4a3e-bace-22284dc415da/copy/0.log" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.786092 4720 generic.go:334] "Generic (PLEG): container finished" podID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerID="ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307" exitCode=143 Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.786159 4720 scope.go:117] "RemoveContainer" containerID="ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.786339 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zwx74/must-gather-488gp" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.817352 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c1f67a28-54b6-4a3e-bace-22284dc415da" (UID: "c1f67a28-54b6-4a3e-bace-22284dc415da"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.818269 4720 scope.go:117] "RemoveContainer" containerID="adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.903924 4720 scope.go:117] "RemoveContainer" containerID="ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307" Feb 02 10:18:25 crc kubenswrapper[4720]: E0202 10:18:25.904487 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307\": container with ID starting with ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307 not found: ID does not exist" containerID="ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.904536 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307"} err="failed to get container status \"ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307\": rpc error: code = NotFound desc = could not find container \"ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307\": container with ID starting with ad040ec1f7813f9db05b18df9c7f3719a3be8f9e06987d8ee5d53a3afeb3b307 not found: ID does not exist" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.904566 4720 scope.go:117] "RemoveContainer" containerID="adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d" Feb 02 10:18:25 crc kubenswrapper[4720]: E0202 10:18:25.905056 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d\": container with ID starting with 
adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d not found: ID does not exist" containerID="adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.905096 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d"} err="failed to get container status \"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d\": rpc error: code = NotFound desc = could not find container \"adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d\": container with ID starting with adcb5bfeb50a33f1aed6f7651b64c36296d7a054179b63fb96e8e3bc3db8a88d not found: ID does not exist" Feb 02 10:18:25 crc kubenswrapper[4720]: I0202 10:18:25.908438 4720 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1f67a28-54b6-4a3e-bace-22284dc415da-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 10:18:26 crc kubenswrapper[4720]: I0202 10:18:26.916428 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" path="/var/lib/kubelet/pods/c1f67a28-54b6-4a3e-bace-22284dc415da/volumes" Feb 02 10:19:21 crc kubenswrapper[4720]: I0202 10:19:21.148181 4720 scope.go:117] "RemoveContainer" containerID="af332d38c9b8e19b04ef7e5232854180caf94917e698960fdf862452300b5b76" Feb 02 10:19:47 crc kubenswrapper[4720]: I0202 10:19:47.902028 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:19:47 crc kubenswrapper[4720]: I0202 10:19:47.902978 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:20:17 crc kubenswrapper[4720]: I0202 10:20:17.902444 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:20:17 crc kubenswrapper[4720]: I0202 10:20:17.902924 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:20:47 crc kubenswrapper[4720]: I0202 10:20:47.902444 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:20:47 crc kubenswrapper[4720]: I0202 10:20:47.903250 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:20:47 crc kubenswrapper[4720]: I0202 10:20:47.903308 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" Feb 02 10:20:47 crc kubenswrapper[4720]: I0202 10:20:47.904107 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c29c90396080c3b6f0eb47fdf6a655e780cf1edc14fd7dcdccb0bee32ce6957"} pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:20:47 crc kubenswrapper[4720]: I0202 10:20:47.904166 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" containerID="cri-o://0c29c90396080c3b6f0eb47fdf6a655e780cf1edc14fd7dcdccb0bee32ce6957" gracePeriod=600 Feb 02 10:20:48 crc kubenswrapper[4720]: I0202 10:20:48.449760 4720 generic.go:334] "Generic (PLEG): container finished" podID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerID="0c29c90396080c3b6f0eb47fdf6a655e780cf1edc14fd7dcdccb0bee32ce6957" exitCode=0 Feb 02 10:20:48 crc kubenswrapper[4720]: I0202 10:20:48.450077 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerDied","Data":"0c29c90396080c3b6f0eb47fdf6a655e780cf1edc14fd7dcdccb0bee32ce6957"} Feb 02 10:20:48 crc kubenswrapper[4720]: I0202 10:20:48.450464 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" event={"ID":"0342796d-ac1a-4cfa-8666-1c772eab1ed2","Type":"ContainerStarted","Data":"016a533ee4a7bc3551f01f65abe8c123fc18d18f347f0c36b204f00939f23f97"} Feb 02 10:20:48 crc kubenswrapper[4720]: I0202 10:20:48.450502 4720 scope.go:117] "RemoveContainer" containerID="cc33e25dd93dfb26b266d881bab3d671471aa830b017c7da97b17ee276eb3556" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.187732 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:21:54 crc kubenswrapper[4720]: E0202 10:21:54.188812 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="copy" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.188827 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="copy" Feb 02 10:21:54 crc kubenswrapper[4720]: E0202 10:21:54.188864 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="gather" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.188871 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="gather" Feb 02 10:21:54 crc kubenswrapper[4720]: E0202 10:21:54.188915 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8ee293-946d-4332-958d-62556d858092" containerName="collect-profiles" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.188924 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d8ee293-946d-4332-958d-62556d858092" containerName="collect-profiles" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.189144 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8ee293-946d-4332-958d-62556d858092" containerName="collect-profiles" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.189185 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="copy" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.189195 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f67a28-54b6-4a3e-bace-22284dc415da" containerName="gather" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.190859 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.207830 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.339612 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.340189 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsbn\" (UniqueName: \"kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.340330 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.442138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsbn\" (UniqueName: \"kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.442613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.442811 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.442995 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.443273 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.464724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsbn\" (UniqueName: \"kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn\") pod \"certified-operators-kbhkh\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:54 crc kubenswrapper[4720]: I0202 10:21:54.515873 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:21:55 crc kubenswrapper[4720]: I0202 10:21:55.103181 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.117023 4720 generic.go:334] "Generic (PLEG): container finished" podID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerID="22a0e83872b7402a20654345d12bf23b63a43ae501b694985b04f98e1015227d" exitCode=0 Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.117114 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerDied","Data":"22a0e83872b7402a20654345d12bf23b63a43ae501b694985b04f98e1015227d"} Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.117742 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerStarted","Data":"03c934d7d97473ac7f4ccc088028dd46e6baa7800775af48533f7c1593fe060a"} Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.120204 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.387230 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.389552 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.403272 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.403485 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchlk\" (UniqueName: \"kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.403561 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.414533 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.512311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.512360 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.512408 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rchlk\" (UniqueName: \"kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.512430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.512716 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.569691 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rchlk\" (UniqueName: \"kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk\") pod \"redhat-marketplace-hzrxm\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:56 crc kubenswrapper[4720]: I0202 10:21:56.716373 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:21:57 crc kubenswrapper[4720]: I0202 10:21:57.131136 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerStarted","Data":"46f49d5c46e93e272b0715ecdd0b12ab4664e507c495f87049574bb3d2a048a5"} Feb 02 10:21:57 crc kubenswrapper[4720]: I0202 10:21:57.213210 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:21:57 crc kubenswrapper[4720]: W0202 10:21:57.216627 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7abead3d_e915_487f_aa19_38807b608b91.slice/crio-12d3b5875911e2e5ea571f67ab638fc2415e06c2c0d31f6b2b2a59ac47389df1 WatchSource:0}: Error finding container 12d3b5875911e2e5ea571f67ab638fc2415e06c2c0d31f6b2b2a59ac47389df1: Status 404 returned error can't find the container with id 12d3b5875911e2e5ea571f67ab638fc2415e06c2c0d31f6b2b2a59ac47389df1 Feb 02 10:21:58 crc kubenswrapper[4720]: I0202 10:21:58.145023 4720 generic.go:334] "Generic (PLEG): container finished" podID="7abead3d-e915-487f-aa19-38807b608b91" containerID="8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619" exitCode=0 Feb 02 10:21:58 crc kubenswrapper[4720]: I0202 10:21:58.145078 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerDied","Data":"8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619"} Feb 02 10:21:58 crc kubenswrapper[4720]: I0202 10:21:58.145482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerStarted","Data":"12d3b5875911e2e5ea571f67ab638fc2415e06c2c0d31f6b2b2a59ac47389df1"} Feb 02 10:21:58 crc kubenswrapper[4720]: I0202 10:21:58.147504 4720 generic.go:334] "Generic (PLEG): container finished" podID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerID="46f49d5c46e93e272b0715ecdd0b12ab4664e507c495f87049574bb3d2a048a5" exitCode=0 Feb 02 10:21:58 crc kubenswrapper[4720]: I0202 10:21:58.147531 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerDied","Data":"46f49d5c46e93e272b0715ecdd0b12ab4664e507c495f87049574bb3d2a048a5"} Feb 02 10:22:00 crc kubenswrapper[4720]: I0202 10:22:00.168118 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerStarted","Data":"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6"} Feb 02 10:22:00 crc kubenswrapper[4720]: I0202 10:22:00.170353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" 
event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerStarted","Data":"20e8408b5f4c92a8b4430b7ec3c3ea9c7465933c030ee4fe9be9008dd493bd34"} Feb 02 10:22:01 crc kubenswrapper[4720]: I0202 10:22:01.184044 4720 generic.go:334] "Generic (PLEG): container finished" podID="7abead3d-e915-487f-aa19-38807b608b91" containerID="ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6" exitCode=0 Feb 02 10:22:01 crc kubenswrapper[4720]: I0202 10:22:01.185562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerDied","Data":"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6"} Feb 02 10:22:01 crc kubenswrapper[4720]: I0202 10:22:01.216383 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kbhkh" podStartSLOduration=3.767192423 podStartE2EDuration="7.216364052s" podCreationTimestamp="2026-02-02 10:21:54 +0000 UTC" firstStartedPulling="2026-02-02 10:21:56.119895324 +0000 UTC m=+5149.975520890" lastFinishedPulling="2026-02-02 10:21:59.569066973 +0000 UTC m=+5153.424692519" observedRunningTime="2026-02-02 10:22:00.235248021 +0000 UTC m=+5154.090873607" watchObservedRunningTime="2026-02-02 10:22:01.216364052 +0000 UTC m=+5155.071989608" Feb 02 10:22:02 crc kubenswrapper[4720]: I0202 10:22:02.196260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerStarted","Data":"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552"} Feb 02 10:22:02 crc kubenswrapper[4720]: I0202 10:22:02.220778 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hzrxm" podStartSLOduration=2.7141664260000002 podStartE2EDuration="6.220755748s" podCreationTimestamp="2026-02-02 10:21:56 +0000 UTC" firstStartedPulling="2026-02-02 10:21:58.147337489 +0000 UTC m=+5152.002963045" lastFinishedPulling="2026-02-02 10:22:01.653926811 +0000 UTC m=+5155.509552367" observedRunningTime="2026-02-02 10:22:02.212980809 +0000 UTC m=+5156.068606365" watchObservedRunningTime="2026-02-02 10:22:02.220755748 +0000 UTC m=+5156.076381304" Feb 02 10:22:04 crc kubenswrapper[4720]: I0202 10:22:04.516686 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:04 crc kubenswrapper[4720]: I0202 10:22:04.517250 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:04 crc kubenswrapper[4720]: I0202 10:22:04.575831 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:05 crc kubenswrapper[4720]: I0202 10:22:05.265214 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:05 crc kubenswrapper[4720]: I0202 10:22:05.982309 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:22:06 crc kubenswrapper[4720]: I0202 10:22:06.717044 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:06 crc kubenswrapper[4720]: I0202 10:22:06.717945 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:06 crc kubenswrapper[4720]: I0202 10:22:06.784906 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:07 crc kubenswrapper[4720]: I0202 10:22:07.245664 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kbhkh" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="registry-server" containerID="cri-o://20e8408b5f4c92a8b4430b7ec3c3ea9c7465933c030ee4fe9be9008dd493bd34" gracePeriod=2 Feb 02 10:22:07 crc kubenswrapper[4720]: I0202 10:22:07.646503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.270588 4720 generic.go:334] "Generic (PLEG): container finished" podID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerID="20e8408b5f4c92a8b4430b7ec3c3ea9c7465933c030ee4fe9be9008dd493bd34" exitCode=0 Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.272830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerDied","Data":"20e8408b5f4c92a8b4430b7ec3c3ea9c7465933c030ee4fe9be9008dd493bd34"} Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.386653 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.452739 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.489839 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content\") pod \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.490255 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxsbn\" (UniqueName: \"kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn\") pod \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.490347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities\") pod \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\" (UID: \"28a00d9b-3da1-410e-bc96-8f49fdf9880b\") " Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.491056 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities" (OuterVolumeSpecName: "utilities") pod "28a00d9b-3da1-410e-bc96-8f49fdf9880b" (UID: "28a00d9b-3da1-410e-bc96-8f49fdf9880b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.491842 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.498191 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn" (OuterVolumeSpecName: "kube-api-access-qxsbn") pod "28a00d9b-3da1-410e-bc96-8f49fdf9880b" (UID: "28a00d9b-3da1-410e-bc96-8f49fdf9880b"). InnerVolumeSpecName "kube-api-access-qxsbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.540740 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28a00d9b-3da1-410e-bc96-8f49fdf9880b" (UID: "28a00d9b-3da1-410e-bc96-8f49fdf9880b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.602338 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a00d9b-3da1-410e-bc96-8f49fdf9880b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:08 crc kubenswrapper[4720]: I0202 10:22:08.602490 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxsbn\" (UniqueName: \"kubernetes.io/projected/28a00d9b-3da1-410e-bc96-8f49fdf9880b-kube-api-access-qxsbn\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.287172 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbhkh" event={"ID":"28a00d9b-3da1-410e-bc96-8f49fdf9880b","Type":"ContainerDied","Data":"03c934d7d97473ac7f4ccc088028dd46e6baa7800775af48533f7c1593fe060a"} Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.287260 4720 scope.go:117] "RemoveContainer" containerID="20e8408b5f4c92a8b4430b7ec3c3ea9c7465933c030ee4fe9be9008dd493bd34" Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.287265 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kbhkh" Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.325468 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.329717 4720 scope.go:117] "RemoveContainer" containerID="46f49d5c46e93e272b0715ecdd0b12ab4664e507c495f87049574bb3d2a048a5" Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.335652 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kbhkh"] Feb 02 10:22:09 crc kubenswrapper[4720]: I0202 10:22:09.357268 4720 scope.go:117] "RemoveContainer" containerID="22a0e83872b7402a20654345d12bf23b63a43ae501b694985b04f98e1015227d" Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.308596 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hzrxm" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="registry-server" containerID="cri-o://86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552" gracePeriod=2 Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.853095 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.899106 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" path="/var/lib/kubelet/pods/28a00d9b-3da1-410e-bc96-8f49fdf9880b/volumes" Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.947853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content\") pod \"7abead3d-e915-487f-aa19-38807b608b91\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.948430 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rchlk\" (UniqueName: \"kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk\") pod \"7abead3d-e915-487f-aa19-38807b608b91\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.948511 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities\") pod \"7abead3d-e915-487f-aa19-38807b608b91\" (UID: \"7abead3d-e915-487f-aa19-38807b608b91\") " Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.949426 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities" (OuterVolumeSpecName: "utilities") pod "7abead3d-e915-487f-aa19-38807b608b91" (UID: "7abead3d-e915-487f-aa19-38807b608b91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.955340 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk" (OuterVolumeSpecName: "kube-api-access-rchlk") pod "7abead3d-e915-487f-aa19-38807b608b91" (UID: "7abead3d-e915-487f-aa19-38807b608b91"). InnerVolumeSpecName "kube-api-access-rchlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:22:10 crc kubenswrapper[4720]: I0202 10:22:10.969947 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7abead3d-e915-487f-aa19-38807b608b91" (UID: "7abead3d-e915-487f-aa19-38807b608b91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.050660 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rchlk\" (UniqueName: \"kubernetes.io/projected/7abead3d-e915-487f-aa19-38807b608b91-kube-api-access-rchlk\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.050699 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.050710 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abead3d-e915-487f-aa19-38807b608b91-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.322656 4720 generic.go:334] "Generic (PLEG): container finished" podID="7abead3d-e915-487f-aa19-38807b608b91" containerID="86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552" exitCode=0 Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.322701 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerDied","Data":"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552"} Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.322772 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hzrxm" event={"ID":"7abead3d-e915-487f-aa19-38807b608b91","Type":"ContainerDied","Data":"12d3b5875911e2e5ea571f67ab638fc2415e06c2c0d31f6b2b2a59ac47389df1"} Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.322786 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hzrxm" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.322796 4720 scope.go:117] "RemoveContainer" containerID="86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.348069 4720 scope.go:117] "RemoveContainer" containerID="ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.379873 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.393081 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hzrxm"] Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.400508 4720 scope.go:117] "RemoveContainer" containerID="8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.449197 4720 scope.go:117] "RemoveContainer" containerID="86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552" Feb 02 10:22:11 crc kubenswrapper[4720]: E0202 10:22:11.449654 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552\": container with ID starting with 86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552 not found: ID does not exist" containerID="86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.449710 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552"} err="failed to get container status \"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552\": rpc error: code = NotFound desc = could not find container \"86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552\": container with ID starting with 86b1b7c5811857fb3b0cb73b485250a7a43b509dc314040f08933a7fffea9552 not found: ID does not exist" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.449733 4720 scope.go:117] "RemoveContainer" containerID="ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6" Feb 02 10:22:11 crc kubenswrapper[4720]: E0202 10:22:11.450167 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6\": container with ID starting with ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6 not found: ID does not exist" containerID="ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.450240 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6"} err="failed to get container status \"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6\": rpc error: code = NotFound desc = could not find container \"ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6\": container with ID starting with ae0871c4b112f618f9c56cbbc1aeb47244f5611f9b28f16f1c812e9ea7ea9ac6 not found: ID does not exist" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.450280 4720 scope.go:117] "RemoveContainer" 
containerID="8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619" Feb 02 10:22:11 crc kubenswrapper[4720]: E0202 10:22:11.450753 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619\": container with ID starting with 8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619 not found: ID does not exist" containerID="8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619" Feb 02 10:22:11 crc kubenswrapper[4720]: I0202 10:22:11.450786 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619"} err="failed to get container status \"8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619\": rpc error: code = NotFound desc = could not find container \"8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619\": container with ID starting with 8e59d23a1cf8e27c3775ac2091f0c481fa881c98d85285adb50228aaf98d4619 not found: ID does not exist" Feb 02 10:22:12 crc kubenswrapper[4720]: I0202 10:22:12.897126 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7abead3d-e915-487f-aa19-38807b608b91" path="/var/lib/kubelet/pods/7abead3d-e915-487f-aa19-38807b608b91/volumes" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.948166 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949103 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="extract-utilities" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949117 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="extract-utilities" Feb 02 10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949132 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="extract-content" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949139 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="extract-content" Feb 02 10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949156 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="extract-utilities" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949164 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="extract-utilities" Feb 02 10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949179 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949186 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949222 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="extract-content" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949230 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="extract-content" Feb 02 
10:22:48 crc kubenswrapper[4720]: E0202 10:22:48.949243 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949251 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949488 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7abead3d-e915-487f-aa19-38807b608b91" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.949503 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a00d9b-3da1-410e-bc96-8f49fdf9880b" containerName="registry-server" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.951184 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:48 crc kubenswrapper[4720]: I0202 10:22:48.983218 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.092385 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.092505 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.092751 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phcr\" (UniqueName: \"kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.194944 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.195032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.195113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2phcr\" (UniqueName: \"kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " 
pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.195467 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.195755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.220378 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2phcr\" (UniqueName: \"kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr\") pod \"redhat-operators-m2596\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.281915 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:49 crc kubenswrapper[4720]: I0202 10:22:49.760403 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:22:50 crc kubenswrapper[4720]: I0202 10:22:50.744693 4720 generic.go:334] "Generic (PLEG): container finished" podID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerID="95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484" exitCode=0 Feb 02 10:22:50 crc kubenswrapper[4720]: I0202 10:22:50.744822 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerDied","Data":"95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484"} Feb 02 10:22:50 crc kubenswrapper[4720]: I0202 10:22:50.745166 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerStarted","Data":"dadc83ebc58a1acf249978a8359faccfa581beb5bd45c776995674c23fdd4320"} Feb 02 10:22:51 crc kubenswrapper[4720]: I0202 10:22:51.757020 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerStarted","Data":"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a"} Feb 02 10:22:52 crc kubenswrapper[4720]: I0202 10:22:52.768014 4720 generic.go:334] "Generic (PLEG): container finished" podID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerID="5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a" exitCode=0 Feb 02 10:22:52 crc kubenswrapper[4720]: I0202 10:22:52.768092 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerDied","Data":"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a"} Feb 02 10:22:54 crc kubenswrapper[4720]: I0202 10:22:54.788114 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" 
event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerStarted","Data":"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465"} Feb 02 10:22:54 crc kubenswrapper[4720]: I0202 10:22:54.823171 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m2596" podStartSLOduration=4.4111311650000005 podStartE2EDuration="6.823146724s" podCreationTimestamp="2026-02-02 10:22:48 +0000 UTC" firstStartedPulling="2026-02-02 10:22:50.747285575 +0000 UTC m=+5204.602911151" lastFinishedPulling="2026-02-02 10:22:53.159301154 +0000 UTC m=+5207.014926710" observedRunningTime="2026-02-02 10:22:54.81102649 +0000 UTC m=+5208.666652056" watchObservedRunningTime="2026-02-02 10:22:54.823146724 +0000 UTC m=+5208.678772280" Feb 02 10:22:59 crc kubenswrapper[4720]: I0202 10:22:59.283377 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:22:59 crc kubenswrapper[4720]: I0202 10:22:59.284104 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:23:00 crc kubenswrapper[4720]: I0202 10:23:00.342192 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m2596" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="registry-server" probeResult="failure" output=< Feb 02 10:23:00 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Feb 02 10:23:00 crc kubenswrapper[4720]: > Feb 02 10:23:09 crc kubenswrapper[4720]: I0202 10:23:09.348061 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:23:09 crc kubenswrapper[4720]: I0202 10:23:09.425175 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:23:09 crc kubenswrapper[4720]: I0202 10:23:09.602176 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:23:10 crc kubenswrapper[4720]: I0202 10:23:10.944354 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m2596" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="registry-server" containerID="cri-o://808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465" gracePeriod=2 Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.514719 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.575248 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content\") pod \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.575459 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities\") pod \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.575992 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2phcr\" (UniqueName: \"kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr\") pod \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\" (UID: \"48fb8b1a-ee27-4ea0-9a94-a8d5082da095\") " Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.576990 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities" (OuterVolumeSpecName: "utilities") pod "48fb8b1a-ee27-4ea0-9a94-a8d5082da095" (UID: "48fb8b1a-ee27-4ea0-9a94-a8d5082da095"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.578481 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.582721 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr" (OuterVolumeSpecName: "kube-api-access-2phcr") pod "48fb8b1a-ee27-4ea0-9a94-a8d5082da095" (UID: "48fb8b1a-ee27-4ea0-9a94-a8d5082da095"). InnerVolumeSpecName "kube-api-access-2phcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.680732 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2phcr\" (UniqueName: \"kubernetes.io/projected/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-kube-api-access-2phcr\") on node \"crc\" DevicePath \"\"" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.709622 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48fb8b1a-ee27-4ea0-9a94-a8d5082da095" (UID: "48fb8b1a-ee27-4ea0-9a94-a8d5082da095"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.782550 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb8b1a-ee27-4ea0-9a94-a8d5082da095-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.963770 4720 generic.go:334] "Generic (PLEG): container finished" podID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerID="808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465" exitCode=0 Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.963840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerDied","Data":"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465"} Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.963906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2596" event={"ID":"48fb8b1a-ee27-4ea0-9a94-a8d5082da095","Type":"ContainerDied","Data":"dadc83ebc58a1acf249978a8359faccfa581beb5bd45c776995674c23fdd4320"} Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.963936 4720 scope.go:117] "RemoveContainer" containerID="808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465" Feb 02 10:23:11 crc kubenswrapper[4720]: I0202 10:23:11.963987 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2596" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.010499 4720 scope.go:117] "RemoveContainer" containerID="5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.022911 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.033843 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m2596"] Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.047172 4720 scope.go:117] "RemoveContainer" containerID="95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.085152 4720 scope.go:117] "RemoveContainer" containerID="808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465" Feb 02 10:23:12 crc kubenswrapper[4720]: E0202 10:23:12.087906 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465\": container with ID starting with 808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465 not found: ID does not exist" containerID="808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.087978 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465"} err="failed to get container status \"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465\": rpc error: code = NotFound desc = could not find container \"808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465\": container with ID starting with 808992c364bf76305cb3b1369bf9f7a6226ce564345494244c202ca5b2635465 not found: ID does not exist" Feb 02 10:23:12 crc 
kubenswrapper[4720]: I0202 10:23:12.088018 4720 scope.go:117] "RemoveContainer" containerID="5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a" Feb 02 10:23:12 crc kubenswrapper[4720]: E0202 10:23:12.088699 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a\": container with ID starting with 5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a not found: ID does not exist" containerID="5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.088773 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a"} err="failed to get container status \"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a\": rpc error: code = NotFound desc = could not find container \"5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a\": container with ID starting with 5d688c893e7ca6fa895781f02f9506db3d791025123074071572f526ed0ac50a not found: ID does not exist" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.088823 4720 scope.go:117] "RemoveContainer" containerID="95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484" Feb 02 10:23:12 crc kubenswrapper[4720]: E0202 10:23:12.089294 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484\": container with ID starting with 95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484 not found: ID does not exist" containerID="95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.089339 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484"} err="failed to get container status \"95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484\": rpc error: code = NotFound desc = could not find container \"95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484\": container with ID starting with 95c368bf528f6789c33325072b9cd497edcec8068e0643f41188446058e16484 not found: ID does not exist" Feb 02 10:23:12 crc kubenswrapper[4720]: I0202 10:23:12.902529 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" path="/var/lib/kubelet/pods/48fb8b1a-ee27-4ea0-9a94-a8d5082da095/volumes" Feb 02 10:23:17 crc kubenswrapper[4720]: I0202 10:23:17.901798 4720 patch_prober.go:28] interesting pod/machine-config-daemon-8l7nw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:23:17 crc kubenswrapper[4720]: I0202 10:23:17.904909 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8l7nw" podUID="0342796d-ac1a-4cfa-8666-1c772eab1ed2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.776786 4720 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-tlvvj"] Feb 02 10:23:40 crc kubenswrapper[4720]: E0202 10:23:40.778104 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="extract-content" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.778118 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="extract-content" Feb 02 10:23:40 crc kubenswrapper[4720]: E0202 10:23:40.778136 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="registry-server" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.778142 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="registry-server" Feb 02 10:23:40 crc kubenswrapper[4720]: E0202 10:23:40.778163 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="extract-utilities" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.778170 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="extract-utilities" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.778346 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fb8b1a-ee27-4ea0-9a94-a8d5082da095" containerName="registry-server" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.780085 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.797145 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlvvj"] Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.824325 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49mz\" (UniqueName: \"kubernetes.io/projected/73a9ea2b-d226-41a4-8895-31467d64b8fd-kube-api-access-m49mz\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.824513 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-catalog-content\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.824596 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-utilities\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.926780 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-catalog-content\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.926910 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-utilities\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.926951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49mz\" (UniqueName: \"kubernetes.io/projected/73a9ea2b-d226-41a4-8895-31467d64b8fd-kube-api-access-m49mz\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.928397 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-catalog-content\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:40 crc kubenswrapper[4720]: I0202 10:23:40.928610 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a9ea2b-d226-41a4-8895-31467d64b8fd-utilities\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:41 crc kubenswrapper[4720]: I0202 10:23:41.066930 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49mz\" (UniqueName: \"kubernetes.io/projected/73a9ea2b-d226-41a4-8895-31467d64b8fd-kube-api-access-m49mz\") pod \"community-operators-tlvvj\" (UID: \"73a9ea2b-d226-41a4-8895-31467d64b8fd\") " pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:41 crc kubenswrapper[4720]: I0202 10:23:41.107471 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlvvj" Feb 02 10:23:41 crc kubenswrapper[4720]: I0202 10:23:41.655267 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlvvj"] Feb 02 10:23:42 crc kubenswrapper[4720]: I0202 10:23:42.248592 4720 generic.go:334] "Generic (PLEG): container finished" podID="73a9ea2b-d226-41a4-8895-31467d64b8fd" containerID="2a84fb67f12f84d05a17c90f49491c46dec4ce3c421a993f0d8f9208f7c310ae" exitCode=0 Feb 02 10:23:42 crc kubenswrapper[4720]: I0202 10:23:42.248637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlvvj" event={"ID":"73a9ea2b-d226-41a4-8895-31467d64b8fd","Type":"ContainerDied","Data":"2a84fb67f12f84d05a17c90f49491c46dec4ce3c421a993f0d8f9208f7c310ae"} Feb 02 10:23:42 crc kubenswrapper[4720]: I0202 10:23:42.248994 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlvvj" event={"ID":"73a9ea2b-d226-41a4-8895-31467d64b8fd","Type":"ContainerStarted","Data":"a31d64da844b67fcc388307d2f8c1b100eb96cbe4ce60e1617d51030cc7fdeb1"} Feb 02 10:23:44 crc kubenswrapper[4720]: I0202 10:23:44.271608 4720 generic.go:334] "Generic (PLEG): container finished" podID="73a9ea2b-d226-41a4-8895-31467d64b8fd" containerID="41af2b4336ebdfbde6f4e7881b44a9b5cd6e795b46b5ceb990c4c7474ebdadec" exitCode=0 Feb 02 10:23:44 crc kubenswrapper[4720]: I0202 10:23:44.271681 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlvvj" event={"ID":"73a9ea2b-d226-41a4-8895-31467d64b8fd","Type":"ContainerDied","Data":"41af2b4336ebdfbde6f4e7881b44a9b5cd6e795b46b5ceb990c4c7474ebdadec"} Feb 02 10:23:45 crc kubenswrapper[4720]: I0202 10:23:45.282096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlvvj" event={"ID":"73a9ea2b-d226-41a4-8895-31467d64b8fd","Type":"ContainerStarted","Data":"fe2009fee7f932eb67c5de075ff4685f687018370cd86fc92f33eaee884b8903"} Feb 02 10:23:45 crc kubenswrapper[4720]: I0202 10:23:45.313734 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlvvj" podStartSLOduration=2.847267758 podStartE2EDuration="5.313708817s" podCreationTimestamp="2026-02-02 10:23:40 +0000 UTC" firstStartedPulling="2026-02-02 10:23:42.252844272 +0000 UTC m=+5256.108469828" lastFinishedPulling="2026-02-02 10:23:44.719285331 +0000 UTC m=+5258.574910887" observedRunningTime="2026-02-02 10:23:45.304679848 +0000 UTC m=+5259.160305414" watchObservedRunningTime="2026-02-02 10:23:45.313708817 +0000 UTC m=+5259.169334383" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515140075474024454 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015140075475017372 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015140062722016504 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015140062723015455 5ustar corecore